I have a data structure like this (the real one has more node levels):
          -------Node 1------
          |                 |
          |                 |
       Node A:           Node B:
       -element 1A1      -element 1B1
       -element 1A2      -element 1B2
Each element is identified by its parents' IDs. Each node and element may or may not store a value. Values are inherited from parents. So when I want to find the value for 1A2, I (see the sketch after the list):
1) check whether a value for 1A2 exists
2) if not, check whether a value for A exists
3) if not, check whether a value for 1 exists
4) use the first value found
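For illustration, a minimal sketch of this fallback lookup, written here in Erlang with all stored values held in a map keyed by the element's full path (the function name and data layout are made up):

%% Values keyed by full path, e.g. #{[1] => V1, [1,a] => VA, [1,a,2] => V1A2}.
%% find_value([1,a,2], Values) tries 1A2's own path first, then A's,
%% then node 1's, and returns the first value found.
find_value([], _Values) ->
    not_found;
find_value(Path, Values) ->
    case maps:find(Path, Values) of
        {ok, Value} -> {ok, Value};
        error -> find_value(lists:droplast(Path), Values)
    end.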
The structure is stored in a database and is much more complex than shown, so the problem is that the database queries are too slow. But the structure doesn't change often, so I decided to build a server-side cache for it. The cache is invalidated after any change in the structure and rebuilt on the first attempt to read some element's value. The problem is that the cache keys look like this:
"1-A-1" for 1A1 value
"1-A-2" for 1A2 value
"1-A" for A value
"1" for 1 value
Suppose A has a value, 1A1 has no value, but 1A2 does. The first entry cached is then "1-A" (stored while resolving 1A1). When I next try to find the value for 1A2, I search the cache first and find the key "1-A", which fits my element, since those are the right ancestor nodes. So I make no database query, assuming the value found in the cache is the right one. But it is not, because 1A2 has its own value.
How can I solve this problem? Is there any solution? I want to make as few queries as possible, but I always want to find the exact value for a given element.
I have an ets table 'table' with entries of the form {key,[val1,val2]}.
I selected this data from the table using:
ets:select(table,[{{'$1','$2'},[],['$$']}]).
[[key,["val1",<<"12">>]],
[key,["val2",<<"6">>]],
[key,["val3",<<"16">>]]]
I want to delete an entry matching the [val1,val2] part, using this:
ets:select_delete(table,[{{'$1','$2'},[{'==','$2',["val1",<<"12">>]}],['$$']}]).
0
But when I run the select again, I still get:
ets:select(table,[{{'$1','$2'},[],['$$']}]).
[[key,["val1",<<"12">>]],
[key,["val2",<<"6">>]],
[key,["val3",<<"16">>]]]
How can I delete this entry based on the non-key part?
The ets:select_delete documentation says:
The match specification has to return the atom true if the object is to be deleted. No other return value gets the object deleted. So one cannot use the same match specification for looking up elements as for deleting them.
So try this:
ets:select_delete(table,[{{'$1','$2'},[{'==','$2',["val1",<<"12">>]}],[true]}]).
ets:select_delete returns the number of records it deleted, so hopefully it should return 1 this time.
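For reference, a quick sketch of the whole flow (table contents taken from the question; note the table must be a bag or duplicate_bag, since all three objects share the same key and a default set table would keep only one of them):

%% create and fill the table
T = ets:new(table, [bag]),
ets:insert(T, [{key,["val1",<<"12">>]},
               {key,["val2",<<"6">>]},
               {key,["val3",<<"16">>]}]),
%% delete the object whose non-key part matches; returns 1
1 = ets:select_delete(T, [{{'$1','$2'},
                           [{'==','$2',["val1",<<"12">>]}],
                           [true]}]),
%% only val2 and val3 are left
[[key,["val2",<<"6">>]],[key,["val3",<<"16">>]]] =
    lists:sort(ets:select(T, [{{'$1','$2'},[],['$$']}])).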
I have to insert incremental values into a column of a table using FitNesse. I get the starting value from a stored procedure which returns the last inserted value. So I have to increment that value and store it.
For example: I get a value from the stored procedure output, increment it by 1, and insert it into the table.
Any ideas?
Output from the stored procedure is like: ACRDE0001 (PK)
Values to store in the table: ACRDE0002, ACRDE0003, .....
Expected output
!|insert|table1|
|col1|col2|col3|
|ACRDE0001|abc|def|
|ACRDE0002|abc|def|
|ACRDE0003|abc|def|
...
As far as I'm aware, the only way to change (e.g. increment) a value you get during your test is by writing some code in a fixture. There is a pull request to allow more dynamic Slim expressions directly in the wiki, but that has not been merged (let alone released) yet.
Your question suggests that the value is something you get from a database, and that you then want to send the generated/incremented value back with the new records you insert. In that case I wonder whether the increment is actually that useful to have in your wiki (your test case is not about the generated values, is it?).
Maybe your fixture could just retrieve the initial value (or have it supplied as a constructor argument), generate a new value for each row, and send them to the database (see the sketch below).
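The increment itself is plain string handling. A sketch of the logic such a fixture would implement, assuming IDs are always a fixed-length prefix followed by a zero-padded number as in ACRDE0001 (shown in Erlang; next_id is a hypothetical helper):

%% "ACRDE0001" -> "ACRDE0002"; assumes a 5-character prefix and a
%% 4-digit zero-padded counter, matching the question's example IDs
next_id(Id) ->
    {Prefix, Num} = lists:split(5, Id),
    lists:flatten(io_lib:format("~s~4..0B", [Prefix, list_to_integer(Num) + 1])).

The fixture would call this once per row, starting from the stored procedure's output.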
Considering a simple table:
CREATE TABLE transactions (
    enterprise_id uuid,
    transaction_id text,
    state text,
    PRIMARY KEY ((enterprise_id, transaction_id))
);
and a Solr core with default, auto-generated parameters.
How do I construct a Solr query that finds records in this table whose state value exactly matches an input, given that state can be an arbitrary string?
I tried this with a state value of a+b. It works fine with q=state:"a+b", but that creates a "phrase query":
"rawquerystring": "state:\"a+b\"",
"querystring": "state:\"a+b\"",
"parsedquery": "PhraseQuery(state:\"a b\")",
"parsedquery_toString": "state:\"a b\"",
So the same record is found if I use a query like q=state:"a(b", which results in the same phrase query and finds the record with state a+b. That is unacceptable to me, because I need an exact match.
I went through https://cwiki.apache.org/confluence/display/solr/Other+Parsers, and tried using q={!term f=state}a+b or q={!raw f=state}a+b, but neither even finds my sample transaction record.
Probably your state field was generated as a TextField, where standard tokenization (StandardTokenizer) is applied: a split is made on + and the plus sign itself is discarded. You could use a different tokenizer (whitespace?) or just make state an StrField to get an exact match.
This works for me with state as an StrField:
select * from transactions where solr_query='state:a+b';
I have heard that specifying records through tuples in the code is a bad practice: I should always use record fields (#record_name{record_field = something}) instead of plain tuples {record_name, value1, value2, something}.
But how do I match the record against an ETS table? If I have a table with records, I can only match with the following:
ets:match(Table, {'$1','$2','$3',something}).
It is obvious that once I add some new fields to the record definition this pattern match will stop working.
Instead, I would like to use something like this:
ets:match(Table, #record_name{record_field=something})
Unfortunately, it returns an empty list.
The cause of your problem is what the unspecified fields are set to when you write #record_name{record_field=something}. This is the syntax for creating a record: here you are creating a record/tuple which ETS will interpret as a pattern. When you create a record, all the unspecified fields get their default values, either the ones given in the record definition or the default default value undefined.
So if you want to give fields specific values then you must explicitly do this in the record, for example #record_name{f1='$1',f2='$2',record_field=something}. Often when using records and ets you want to set all the unspecified fields to '_', the "don't care variable" for ets matching. There is a special syntax for this using the special, and otherwise illegal, field name _. For example #record_name{record_field=something,_='_'}.
Note that in your example you have set the record name element in the tuple to '$1'. The tuple representing a record always has the record name as the first element. This means that when you create the ets table you should set the key position with {keypos,Pos} to something other than the default 1; otherwise there won't be any indexing and, worse, if you have a table of type 'set' or 'ordered_set' you will only get one element in the table. To get the index of a record field you can use the syntax #Record.Field, in your example #record_name.record_field.
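To make the keypos point and the '_' syntax concrete, here is a small sketch (record name, fields, and values are invented):

-record(user, {name, age, city}).

demo() ->
    %% key is the name field; #user.name evaluates to its tuple index (2)
    T = ets:new(users, [set, {keypos, #user.name}]),
    ets:insert(T, #user{name = anna, age = 30, city = london}),
    ets:insert(T, #user{name = ben,  age = 25, city = paris}),
    %% bind name to '$1', require city to be london, ignore the rest
    ets:match(T, #user{name = '$1', city = london, _ = '_'}).
    %% returns [[anna]]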
Try using
ets:match(Table, #record_name{record_field=something, _='_'})
The format you are looking for is #record_name{record_field=something, _ = '_'}
http://www.erlang.org/doc/man/ets.html#match-2
http://www.erlang.org/doc/programming_examples/records.html (see 1.3 Creating a record)
I have a model object which did not have a counter cache on it before, and I added one via a migration. The thing is, in that migration I tried and failed to set the starting value of the counter cache based on the number of child objects I already had. No attempt to update the cache value got written to the database. I even tried to do it from the console, but any attempt to write directly to that value on the parent was ignored.
Changing the number of children updated the counter cache (as it should), and removing the ":counter_cache => true" from the child would let me update the value on the parent. But that's cheating. I needed to be able to add the counter cache and then set its starting value to the number of children in the migration, so pages that show the count would start with correct values.
What's the correct way to do that so that ActiveRecord doesn't override me?
You want to use the update_counters method; this blog post has more details:
josh.the-owens.com add a counter cache to an existing db-table
This RailsCasts episode on the topic is also a good resource:
http://railscasts.com/episodes/23-counter-cache-column
The canonical way is to use reset_counters, i.e.:
Author.find_each do |author|
  Author.reset_counters(author.id, :books)
end
...and that's how you should do it if those tables are of modest size, i.e. <= 1,000,000 rows.
BUT: for anything large this will take on the order of days, because it requires two queries for each row, and fully instantiates a model etc.
Here's a way to do it about 5 orders of magnitude faster:
Author
  .joins(:books)
  .select("authors.id, authors.books_count, count(books.id) as count")
  .group("authors.id")
  .having("authors.books_count != count(books.id)")
  .pluck(:id, :books_count, "count(books.id)")
  .each_with_index do |(author_id, old_count, fixed_count), index|
    puts "at index %7i: fixed author id %7i, new books_count %4i, previous count %4i" % [index, author_id, fixed_count, old_count] if index % 1000 == 0
    Author.update_counters(author_id, books_count: fixed_count - old_count)
  end
It's also possible to do it directly in SQL with a single query, but the above worked well enough for me. Note the somewhat convoluted way it uses the difference between the previous count and the correct one: this is necessary because update_counters doesn't allow setting an absolute value, only incrementing or decrementing it. The column is otherwise marked read-only.