I have an ets table 'table' with entries of the form {key,[val1,val2]}.
I selected this data from the table using
ets:select(table,[{{'$1','$2'},[],['$$']}]).
[[key,["val1",<<"12">>]],
[key,["val2",<<"6">>]],
[key,["val3",<<"16">>]]]
I want to delete an entry matching the [val1,val2] part, using this:
ets:select_delete(table,[{{'$1','$2'},[{'==','$2',["val1",<<"12">>]}],['$$']}]).
0
But when I run the select again, I still get
ets:select(table,[{{‘$1','$2'},[],['$$']}]).
[[key,["val1",<<"12">>]],
[key,["val2",<<"6">>]],
[key,["val3",<<"16">>]]]
How can I delete this entry based on the non-key part?
The ets:select_delete documentation says:
The match specification has to return the atom true if the object is to be deleted. No other return value gets the object deleted. So one cannot use the same match specification for looking up elements as for deleting them.
So try this:
ets:select_delete(table,[{{'$1','$2'},[{'==','$2',["val1",<<"12">>]}],[true]}]).
ets:select_delete returns the number of records it deleted, so hopefully it should return 1 this time.
Hello, I want to delete all nodes with the label GRAPH_OBJECT that have a property (let's call it myprop) whose value is not in a list of numeric values that I have in a CSV or text file.
How do I accomplish this with Cypher?
This should work.
// load csv
LOAD CSV FROM "file://values.txt" AS row
// create a collection of the first column turned into numeric values
WITH collect(toInt(row[0])) AS blacklist
// find the nodes
MATCH (node:GRAPH_OBJECT)
// for any of the properties of the node, if its value is in our blacklist
WHERE ANY(property in keys(node) WHERE node[property] IN blacklist)
// delete node and relationships
DETACH DELETE node;
Starting with Michael Hunger's code and updating with your comment, I believe this should work:
// load csv
LOAD CSV FROM "file://values.txt" AS row
// create a collection of the first column turned into numeric values
WITH collect(toInt(row[0])) AS whitelist
// find the nodes
MATCH (node:GRAPH_OBJECT)
// keep only nodes whose myprop value is not in our whitelist
WHERE NOT node.myprop IN whitelist
// delete node and relationships
DETACH DELETE node;
The ANY(...) part of the WHERE clause in Michael's code (WHERE ANY(property IN keys(node) ...)) appears to be there so that every property on the node can be searched, so if you only need to check myprop it is not needed.
This should work quite fast, as it also uses an index. First, you can create an index on the property that you use to compare the nodes against the values in the CSV.
In your case,
CREATE INDEX ON :GRAPH_OBJECT(myprop)
Then you can do something like this to delete those nodes which are present in your database but not in the CSV:
LOAD CSV WITH HEADERS FROM "file://values.csv" AS line
WITH collect(line.myprop) AS whitelist
// Assuming the CSV has a header row with a 'myprop' column, which is compared against the existing database nodes' property
MATCH (node:GRAPH_OBJECT)
WHERE EXISTS (node.myprop)
AND
NOT node.myprop IN whitelist
DETACH DELETE node;
That's it. You can add PROFILE to the query to see how well it performs using the index.
I have a Disclosure model that has an accession_number column. The column has a unique constraint.
Given an array of accession_numbers, how can I find which of them are not used yet?
I'm currently checking existence for every number, but I think there is a better way to do this.
accession_numbers.select{|number| !Disclosure.where(accession_number: number).exists?}
You can query for all disclosures which have an accession_number in your array.
existing = Disclosure.where(accession_number: accession_numbers).pluck(:accession_number)
Then just remove the existing ones from your array
accession_numbers - existing
Since you already have a unique constraint at the DB level,
existing_accessions = Disclosure.pluck(:accession_number)
results in an array of existing accessions.
accession_numbers - existing_accessions results in an array of unused accession numbers.
I have a data structure which is like this (the real one has more node levels):
-------Node 1------
| |
| |
Node A: Node B:
-element 1A1 -element 1B1
-element 1A2 -element 1B2
Each element is identified by its parents' IDs. Each node and element may or may not store some value. The values are inherited from parents. So when I want to find the value for 1A2, I (see the sketch after this list):
1) check if value for 1A2 exists
2) if not, check if value for A exists
3) if not, check if value for 1 exists
4) save the first found value
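A rough sketch of this fallback lookup in Java (the names here are hypothetical; the store map stands in for whatever actually holds the values per node path):

import java.util.HashMap;
import java.util.List;
import java.util.Map;

class ValueResolver {
    // Hypothetical store: one value per node path, keyed like "1-A-2", "1-A", "1".
    private final Map<String, String> store = new HashMap<>();

    // Walks from the most specific path ("1","A","2") up to the root ("1") and
    // returns the first stored value found (step 4), or null if no level has one.
    String resolve(List<String> path) {
        for (int depth = path.size(); depth > 0; depth--) {
            String key = String.join("-", path.subList(0, depth));
            String value = store.get(key);   // steps 1-3: check this level
            if (value != null) {
                return value;
            }
        }
        return null;
    }
}

For example, if only "1-A" has a stored value, resolve(List.of("1", "A", "2")) returns that value, which is the inheritance behavior described above.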
The structure is stored in a database and it's much more complex. So the problem is that the database queries are too slow. But this structure doesn't change often, so I decided to build a server-side cache for it. The cache is cleared after any change to the structure and is rebuilt on the first attempt to read the value of some element. The problem is that, when the cache keys are like this:
"1-A-1" for 1A1 value
"1-A-2" for 1A2 value
"1-A" for A value
"1" for 1 value
1A has a value, 1A1 doesn't have a value but 1A2 does, the first cache entry stored is "1-A" (for example after looking up 1A1, whose value falls back to A). Then when I try to find the value for 1A2, I first search the cache and find the key "1-A", which fits my element since these are the right parent nodes. So I don't make any query to the database, as I assume the value found in the cache is right. But it is not.
How can I solve this problem? Is there any solution? I want to make as few queries as possible, but I always want to find the exact value for a given element.
I have a mapping that filters out a number of IDs from a source flat file and then inserts them into a target table. I want to add a condition to check whether the ID exists in the target table, and if the ID doesn't exist, the row should be written to an error file. How can I get this done? I know we can use a dynamic lookup, but that will only insert into or update the target table, which is not what I want.
Do a normal lookup on the target. If the return value is null, then route it to the error file using a router.
Since you want to write the unmatched rows to the error file, use DD_REJECT in an Update Strategy transformation based on the output from the lookup,
e.g.: IIF(ISNULL(col_1), DD_REJECT, DD_INSERT)
where col_1 is the output port from the lookup (LKP) transformation.
I have been trying to get an answer for the last few weeks. I want to give users a text field in the app and ask them to enter an ID number, and this number will be checked against the columns of the uploaded CSV file. If it matches, display an alert saying that a match was found; otherwise say it wasn't.
A CSV file is basically a plain text file in which items are separated by commas.
For example:
id,name,email
1,Joe,joe@d.com
2,Pat,pat@d.com
To get the list of IDs stored in this CSV file, simply loop through the lines and grab the first item after splitting each line on the comma:
id = line.split(",").get(0); // first element (the exact call depends on the language you're using)
Now, add this id to a collection of ids you are storing, such as a list.
IDs.add(id);
When you take the user input, all you need to do is check whether the id is in your list of ids:
if (IDs.contains(userId)) { print "Found"; }
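Putting it together, a minimal sketch in Java (assuming the uploaded CSV is readable from a local path such as "ids.csv" and has a header row like the example above):

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.HashSet;
import java.util.Set;

public class IdLookup {
    public static void main(String[] args) throws Exception {
        Set<String> ids = new HashSet<>();
        try (BufferedReader reader = new BufferedReader(new FileReader("ids.csv"))) {
            String line = reader.readLine();          // skip the header row
            while ((line = reader.readLine()) != null) {
                ids.add(line.split(",")[0].trim());   // first column holds the id
            }
        }
        String userId = "2";                          // would come from the app's text field
        System.out.println(ids.contains(userId) ? "Found a match" : "No match found");
    }
}

In the app you would show the result as an alert instead of printing it.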