I can specify tables and fill them with data, but I don't know how to overwrite existing fields.
I'm using the InfluxDB plugin with the following code to send data to a specific table in the Influx database, as per the documentation:
def myFields1 = [:]
def myFields2 = [:]
def myCustomMeasurementFields = [:]
myFields1['field_a'] = 11
myFields1['field_b'] = 12
myFields2['field_c'] = 21
myFields2['field_d'] = 22
myCustomMeasurementFields['series_1'] = myFields1
myCustomMeasurementFields['series_2'] = myFields2
myTags = ['series_1':['tag_a':'a','tag_b':'b'],'series_2':['tag_c':'c','tag_d':'d']]
influxDbPublisher(selectedTarget: 'my-target', customDataMap: myCustomMeasurementFields, customDataMapTags: myTags)
So I can define fields, assign values, and assign them to tables (series_1, series_2). But how can I overwrite existing fields in a table that already exists? Thank you.
There is only one way to overwrite values in InfluxDB.
A point you write into InfluxDB may differ from the original data only in its field value; the measurement name, tag keys, and timestamp must all be identical. If you write a new value this way, it automatically overwrites the value you had there before.
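For example, in raw line protocol (a sketch; the measurement, tags, and field are taken from your pipeline code, and the timestamp is made up for illustration), the second write below replaces the first, because the measurement, tag set, and timestamp are identical:
series_1,tag_a=a,tag_b=b field_a=11 1465839830100400200
series_1,tag_a=a,tag_b=b field_a=99 1465839830100400200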
In all other cases you need to delete the old data and then write new values, or simply keep the old data and rely on InfluxDB's filtering capabilities when querying.
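If you take the delete-then-rewrite route, an InfluxQL DELETE can remove the old points first (a sketch; the measurement and tag names come from the example above):
DELETE FROM "series_1" WHERE "tag_a" = 'a'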
As an aside: thinking of InfluxDB as table data generates problems like yours. My advice is to think of it as an indexed time series database, because that is what it truly is.
Related
I am trying to add a new column to an existing Mnesia table. For that, I use the following code.
test() ->
    Transformer =
        fun(X) ->
            #users{name = X#user.name,
                   age = X#user.age,
                   email = X#user.email,
                   year = 1990}
        end,
    {atomic, ok} = mnesia:transform_table(user, Transformer, record_info(fields, users), users).
The two records I have:
-record(user,{name,age,email}).
-record(users,{name,age,email,year}).
My problem is that when I get values from my user table, they come back as
{atomic,[{users,sachith,28,sachith#so,1990}]}
Why do I get users record name when I retrieve data from user table?
The table name and the record name are not necessarily the same. You started out with a table called user holding user records, and then you transformed all the user records into users records. So when you read from the table, it will return users records, since that's what the table now contains.
If you look at the internal representation of a record,
-record(Name, {Field1,...,FieldN}). is represented by {Name,Value1,...,ValueN}.
So, basically you are converting {user,name,age,email} to {users,name,age,email,year} in your table.
But there is a better approach to such a migration, one that will come in handy as you update your records later. Based on a production codebase, a better snippet for the transformer function is:
%% -record(user,{name,age,email}).      % old
%% -record(user,{name,age,email,year}). % new
Transformer =
    fun(X) ->
        #user{name = element(2, X),
              age = element(3, X),
              email = element(4, X),
              year = 1990}
    end,
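Since this version keeps the record name user, the call to mnesia:transform_table stays symmetric with the original (a sketch, assuming the new four-field user record definition is the one now compiled in):
{atomic, ok} = mnesia:transform_table(user, Transformer, record_info(fields, user), user).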
So I have a model Item that has a huge PostgreSQL JSON field called properties. Most of the time this field does not need to be queried or changed; however, we store the price in this field.
I'm currently writing a script that updates this price. There are only a few unique prices but thousands of Items, so to save time I have a list of Items for each unique price, and I'm attempting to do an update_all:
Item.where(id: items).update_all("properties->>'price' = #{unique_price}")
But this gives me:
syntax error at or near "->>"
Is there a way to use update_all to update a key inside a Postgres JSON field?
You need to use the jsonb_set() function; here is an example:
Item.where(id: items).
  update_all(
    "properties = jsonb_set(properties, '{price}', to_json(#{unique_price}::int)::jsonb)"
  )
This preserves all the other keys and updates only the one you target.
Read the documentation for details.
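To make the mechanics concrete: with unique_price = 99, the call above generates SQL roughly like this (a sketch; items is Rails' default table name for the Item model):
UPDATE items
SET properties = jsonb_set(properties, '{price}', to_json(99::int)::jsonb)
WHERE id IN (...)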
You can also do this:
Item.where(id: items).each do |item|
  properties = item.properties
  item.update(properties: properties.merge({
    price: unique_price
  }))
end
The merge method overrides the value of the provided key with the new value, i.e. unique_price. Note that, unlike update_all, this issues one UPDATE per record. The merge documentation is here.
What I came up with based on #Philidor's suggestion is very similar but with dynamic bindings:
assignment = ["field = jsonb_set(field, '{ name_of_the_key }', ?)", value.to_json]
scope.update_all(scope.model.sanitize_sql_for_assignment(assignment))
I'm using Rails to search through a SQLite table (for other reasons I can't use the standard database-model system) using a SELECT query like so:
info = ActiveRecord::Base.connection.execute("SELECT * FROM #{form_name} WHERE EmailAddress = \"#{user_em}\";")
This returns the correct values, but for some reason the output is duplicated; the difference is that the second set doesn't use the column titles as hash keys, instead numbering them from 0 to [num columns]. For example:
{"id"=>1, "Timestamp"=>"2/27/2017 14:26:03", "EmailAddress"=>"-snip-", 0=>1, 1=>"2/27/2017 14:26:03", 2=>"-snip-"}
(I'll note the obvious: there's only one row in the table with that information in it.)
While it's not exactly a fatal problem, I'm interested as to why it's doing so and if it's possible to prevent it. Thanks!
This is expected behaviour: the SQLite driver returns each row as a hash keyed both ways, which allows you to read the values either by column index or by column name:
id = row[0]
timestamp = row["Timestamp"]
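If you'd rather not see the integer keys at all, one workaround (a sketch against the info result from the question) is to strip them after the fact:
# keep only the String (column-name) keys, dropping the 0..n integer duplicates
named_only = info.map { |row| row.select { |key, _| key.is_a?(String) } }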
I am using InfluxDB and its line protocol to insert a large set of data into the database. The data I am getting is in the form of key-value pairs, where the key is a long string containing hierarchical data and the value is a simple integer.
Sample key-value data:
/path/units/unit/subunits/subunit[name\='NAME1']/memory/chip/application/filter/allocations
value = 500

/path/units/unit/subunits/subunit[name\='NAME2']/memory/chip/application/filter/allocations
value = 100
(Note: the name is 'NAME2')

/path/units/unit/subunits/subunit[name\='NAME1']/memory/chip/application/filter/free
value = 700
(Note: instead of allocations, it is free at the leaf)

/path/units/unit/subunits/subunit[name\='NAME2']/memory/graphics/application/filter/swap
value = 600
(Note: instead of chip, graphics is in the path)

/path/units/unit/subunits/subunit[name\='NAME2']/harddisk/data/size
value = 400
(Note: a different path, but it is the same up to subunit)

/path/units/unit/subunits/subunit[name\='NAME2']/harddisk/data/free
value = 100
(Note: the same path, but the last element is different)
Below is the line protocol I am using to insert data.
interface, Key= /path/units/unit/subunits/subunit[name\='NAME2']/harddisk/data/free, valueData= 500
I am using one measurement, namely interface, with one tag and one field set. But this DB design is causing issues when querying the data.
How can I design the database so that I can run queries like "get all records for the subunit where name = NAME1" or "get all size data for every hard disk"?
Thanks in advance.
The Schema I'd recommend would be the following:
interface,filename=/path/units/unit/subunits/subunit[name\='NAME2']/harddisk/data/free value=500
Where filename is a tag and value is the field.
Given that the cardinality of filename is in the thousands, this schema should work well.
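With that schema, both queries from the question can be expressed as regular-expression matches on the filename tag (a sketch in InfluxQL; the patterns assume the paths shown above). All records for the subunit named NAME1:
SELECT * FROM "interface" WHERE "filename" =~ /NAME1/
And all size data for every hard disk:
SELECT * FROM "interface" WHERE "filename" =~ /harddisk\/data\/size/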
I have just started working with py2neo and neo4j.
I am confused about how to go about using indices in my database.
I have created a create_user function:
g = neo4j.GraphDatabaseService()
users_index = g.get_or_create_index(neo4j.Node, "Users")

def create_user(name, username, **kwargs):
    batch = neo4j.WriteBatch(g)
    user = batch.create(node({"name" : name, "username" : username}))
    for key, value in kwargs.iteritems():
        batch.set_property(user, key, value)
    batch.add_labels(user, "User")
    batch.get_or_add_to_index(neo4j.Node, users_index, "username", username, user)
    results = batch.submit()
    print "Created: " + username
Now to obtain users by their username:
def lookup_user(username):
    print node(users_index.get("username", username)[0])
I saw the Schema class and noticed that I can create an index on the "User" label, but I couldn't figure out how to obtain the index and add entities to it.
I want this to be as efficient as possible. Would adding the index on the "User" label improve performance if I add more nodes with different labels later on, or is it already as efficient as it can be?
Also, if I want usernames to be unique per user, how can I do that? And how do I know whether batch.get_or_add_to_index is getting or adding the entity?
Your confusion is understandable. There are actually two types of indexes in Neo4j: the legacy indexes (which you access with the get_or_create_index method) and the new schema indexes (which index nodes based on labels).
The new indexes do not need to be manually kept up to date; they keep themselves in sync as you make changes to the graph, and they are used automatically when you issue Cypher queries against that label/property pair.
The reason the legacy indexes are kept around is that they support some complex functionality that is not yet available for the new indexes, such as geospatial indexing, full-text indexing, and composite indexing.
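To address the uniqueness question: with schema indexes you never add entities by hand; you declare the index or constraint once and Neo4j maintains it from then on. For example, in Cypher (a sketch; a uniqueness constraint is backed by its own schema index, so lookups on :User(username) get the index performance for free):
CREATE CONSTRAINT ON (u:User) ASSERT u.username IS UNIQUE
After that, a query such as MATCH (u:User {username: "alice"}) RETURN u will use the backing index automatically.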