Neo4j uniqueness when updating a unique index value

The uniqueness=create_or_fail option works great when creating a new node, since it returns a 4xx response if a duplicate index key/value already exists.
However, if the node already exists and is indexed, and the indexed value needs to be updated, there is no way (that I am aware of) to update the value and fail if the new value already exists. That is because the Add Node to Index REST call does not return a 4xx response if the new value already exists; as far as I can see, Add Node to Index does not participate in index uniqueness at all.
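To make the asymmetry concrete, here is roughly what the two calls look like in the 1.9 REST API (the index name people, the key email, and node 123 are made-up placeholders, and the exact payloads are from memory, so double-check them against the docs). Creating a unique node fails on a duplicate:
POST /db/data/index/node/people?uniqueness=create_or_fail
{"key": "email", "value": "bob@example.com", "properties": {"email": "bob@example.com"}}
=> 201 Created, or a 4xx (409 Conflict) if that key/value is already indexed
whereas adding an existing node to the index succeeds even when the key/value is already taken:
POST /db/data/index/node/people
{"uri": "http://localhost:7474/db/data/node/123", "key": "email", "value": "bob@example.com"}
=> 201 Created, even if another node is already indexed under that key/value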
One solution is to delete the node and re-add it, but this is not easy, since all the other indexes and relationships on the node would have to be recreated.
Another solution would be to add the uniqueness parameter to the Add Node to Index REST call:
http://docs.neo4j.org/chunked/1.9.M05/rest-api-indexes.html#rest-api-add-node-to-index
Any other ideas on this?
Thanks

I happened upon this question and here's the work-around I figured out.
During an update, do the following in a single REST batch (a rough sketch of the whole batch follows the list):
1. Delete all of the node's entries for the desired index.
2. Create a new node using create_or_fail on the desired index, except instead of your normal properties just use a dummy property such as DeleteMe=true.
3. Add the original node to the desired index; if the batch got this far, the previous step succeeded, so the new value is free.
4. Update the node's properties.
5. Use a Cypher statement to delete the dummy node, e.g.:
START n=node:index_name(index_key={value}) WHERE (n.DeleteMe!)=true DELETE n
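Roughly, the whole batch could look like this (node id 123, index name people, key email, and the new value are made-up placeholders; the endpoint paths are from memory of the 1.9 REST API, so verify them before relying on this):
POST /db/data/batch
[
  {"method": "DELETE", "to": "/index/node/people/123", "id": 0},
  {"method": "POST", "to": "/index/node/people?uniqueness=create_or_fail",
   "body": {"key": "email", "value": "new@example.com", "properties": {"DeleteMe": true}}, "id": 1},
  {"method": "POST", "to": "/index/node/people",
   "body": {"uri": "/node/123", "key": "email", "value": "new@example.com"}, "id": 2},
  {"method": "PUT", "to": "/node/123/properties/email", "body": "new@example.com", "id": 3},
  {"method": "POST", "to": "/cypher",
   "body": {"query": "START n=node:people(email={value}) WHERE (n.DeleteMe!)=true DELETE n",
            "params": {"value": "new@example.com"}}, "id": 4}
]
Since the batch runs as a single transaction, the create_or_fail job failing on a duplicate should abort the whole batch, which is exactly the behaviour we want.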

Related

Microsoft Graph API remove Check List Item

I have a plannerTask, and in its Details it has a CheckList. I use it to programmatically insert CheckListItems, and it all works like a charm when inserting or retrieving the tasks.
My problem arrives when I am going to insert a new CheckListItem and the CheckList already has 20 items. It returns a MaximumChecklistItemsOnTask error (it is forbidden to insert more than 20 items in a checklist).
A solution could be to remove the oldest item, but I am not able to do it. I have tried this:
var elementToRemove = oldDetails.Checklist.Where(c => c.Value.IsChecked).OrderBy(c => c.Value.LastModifiedDateTime).First();
oldDetails.Checklist = oldDetails.Checklist.Where(c => c.Value.LastModifiedDateTime != elementToRemove.Value.LastModifiedDateTime);
But it throws a casting error in the second line:
Unable to cast object of type
'WhereEnumerableIterator`1[System.Collections.Generic.KeyValuePair`2[System.String,Microsoft.Graph.PlannerChecklistItem]]'
to type 'Microsoft.Graph.PlannerChecklistItems'.
What is the right way to remove the oldest element from the checklist?
UPDATE:
First, I retrieve a plannerTask from the server. Then I get the details from this plannerTask, so oldDetails is a plannerTaskDetails object (https://learn.microsoft.com/en-us/graph/api/resources/plannertaskdetails?view=graph-rest-1.0). Inside the plannerTaskDetails object (oldDetails) I have the plannerChecklistItems object (oldDetails.Checklist): https://learn.microsoft.com/en-us/graph/api/resources/plannerchecklistitems?view=graph-rest-1.0.
If plannerChecklistItems were just a List, it would be as easy as list.Remove(item), but it is not a normal list, which is why I am not able to remove the item.
UPDATE 2:
I have found this way to remove the item from oldDetails:
oldDetails.Checklist.AdditionalData.Remove(elementToRemove.Key)
But the way I send the changes to the server is this:
await graphClient.Planner.Tasks[plannerTask.Id].Details.Request().Header("If-Match", oldDetails.GetEtag()).UpdateAsync(newDetails)
As it is a PATCH request (not a PUT), newDetails only contains the records that have changed, that is, the new records. How can I specify there that a record has been deleted from the list? Sorry if my English is not good enough to express myself properly; what I mean is that newDetails is not the full list, it only contains the records that must be added, and I do not know how to specify in that request that one record must be deleted.
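From what I can tell from the plannerTaskDetails update documentation, checklist is an open type whose entries are keyed by client-generated GUIDs: an entry you omit from the PATCH body is left untouched, while an entry you explicitly set to null should be removed. So a single PATCH along these lines (the item ids are made up) should add the new item and delete the old one in one request:
PATCH https://graph.microsoft.com/v1.0/planner/tasks/{task-id}/details
If-Match: <etag of oldDetails>
Content-Type: application/json

{
  "checklist": {
    "d280ed1a-9f6b-4f67-a0d8-7d6b0d3c1e55": {
      "@odata.type": "microsoft.graph.plannerChecklistItem",
      "title": "new checklist item"
    },
    "<id of the item to remove>": null
  }
}
In the C# SDK that would presumably mean putting elementToRemove.Key into newDetails.Checklist.AdditionalData with a null value, but I have not verified that the serializer keeps the null in the request body.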

Looping through list of objects created by createEntry() and editing their properties before doing submitChanges()

Good afternoon fellow developers,
I have come across a scenario where I need to retrieve the list of pending changes from my model and edit a specific property of those entries before sending them to my back-end.
These are new entities I created using the createEntry() method of the OData v2 model. At the time I create these entities, I do not yet have the value I need to add to them. I retrieve the list of entities by calling the getPendingChanges() method on my model.
What I need to do is loop through each of these newly created entities and set a specific property on them before actually sending them to my back-end with the submitChanges() method. Bear in mind that these are entry objects created by the createEntry() method and exist only in my front-end until I am able to submit them successfully.
Any ideas that might point me in the right direction? I look forward to reading from you!
I was able to solve this issue in the following way:
var oModel = this.model;
var oPendingChanges = oModel.getPendingChanges();
var aPathsPendingChanges = $.map(oPendingChanges, function(value, index) { return [index]; });
aPathsPendingChanges.forEach(sPath => oModel.setProperty("/" + sPath + "/PropertyX", "valueFGO"));
The first lines retrieve the entire map of pending changes and then build an array of paths to each individual entry. I then use that array of paths to loop through my list of pending changes and set the property I want on each iteration of the loop. Special thanks to the folks at answers.sap for the guidance!

How to update a single key's value in every child in Firebase?

I have been stuck on this for the last few hours and have tried everything I can think of to update these values in Firebase.
I want to update
is_read_p: "0"
to
is_read_p: "1"
for every record in the database.
So far, I have tried this code:
[[[[_mainRef child:@"messages"] child:[NSString stringWithFormat:@"359_361"]] childByAutoId] updateChildValues:@{@"is_read_c": @"0"}];
But instead of updating the existing records, it adds three more children.
I know there must be a silly mistake or I might be missing something. Please help me find that missing part. Thanks. :)
Each time you call childByAutoId, the client creates a reference to a new, unique child node. Since you then call updateChildValues on that new location, you're creating a new child node instead of updating an existing one.
Firebase doesn't support update queries. You'll need to execute the query, process each matching node, and update them individually (see the sketch below).
Also see:
Swift Firebase: Update specific objects resulting from Firebase query
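In Swift, that read-then-update pattern looks roughly like this (the messages/359_361 path and the "is_read_p" string flag are taken from the question; treat this as a sketch, not tested code):

import FirebaseDatabase

let messagesRef = Database.database().reference()
    .child("messages")
    .child("359_361")

// Read all existing children once, then write the flag back to each one individually.
messagesRef.observeSingleEvent(of: .value) { snapshot in
    for child in snapshot.children {
        guard let messageSnapshot = child as? DataSnapshot else { continue }
        // Setting the value on the child path only touches is_read_p; the other keys stay as they are.
        messageSnapshot.ref.child("is_read_p").setValue("1")
    }
}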

Neo4j Embedded Fulltext Automatic Node Index

When running Neo4j embedded, the default configuration doesn't have the automatic node index set as fulltext (meaning that all Lucene queries are case sensitive). How can I configure the automatic index to be fulltext?
For starters, you must perform this on a new database. The automatic index is lazily created, which means that it isn't created until the first access. You have until the first access to perform this configuration. If you attempt to change the property after it's already been created, it won't work. So the first step is to load the database with automatic indexing enabled (node or relationship).
val db = new GraphDatabaseFactory().newEmbeddedDatabaseBuilder("path/to/db").
  setConfig(GraphDatabaseSettings.node_keys_indexable, "label,username").
  setConfig(GraphDatabaseSettings.node_auto_indexing, "true").
  newGraphDatabase()
Now, before you do anything, you have to set the configuration properties. You can find out about the possible properties and values here. To do this, we just need two more lines.
val autoIndex = db.index.forNodes("node_auto_index")
db.index.setConfiguration(autoIndex, "type", "fulltext")
And that's all there is to it. You can now create nodes and relationships, and the automatic index will be created and populated. You can then query it with any Lucene query:
autoIndex.query("label:*caseinsensitive*")

Neo4j indexes and legacy data

I have a legacy dataset (ENRON data represented as GraphML) that I would like to query. In a comment on a related question, @StefanArmbruster suggests that I use Cypher to query the database. My query use case is simple: given a message id (a property of the Message node), retrieve the node that has that id, and also retrieve the sender and recipient nodes of that message.
It seems that to do this in Cypher, I first have to create an index of the nodes. Is there a way to do this automatically when the data is loaded from the graphML file? (I had used Gremlin to load the data and create the database.)
I also have an external Lucene index of the data (I need it for other purposes). Does it make sense to have two indexes? I could, for example, index the Neo4J node ids into my external index, and then query the graph based on those ids. My concern is about the persistence of these ids. (By analogy, Lucene document ids should not be treated as persistent.)
So, should I:
Index the Neo4j graph internally to query on message ids using Cypher? (If so, what is the best way to do that: regenerate the database with some suitable incantation to get the index built? Build the index on the already-existing db?)
Store Neo4j node ids in my external Lucene index and retrieve nodes via these stored ids?
UPDATE
I have been trying to get auto-indexing to work with Gremlin and an embedded server, but with no luck. In the documentation it says
The underlying database is auto-indexed, see Section 14.12, “Automatic Indexing” so the script can return the imported node by index lookup.
But when I examine the graph after loading a new database, no indexes seem to exist.
The Neo4j documentation on auto indexing says that a bunch of configuration is required. In addition to setting node_auto_indexing = true, you have to configure it
To actually auto index something, you have to set which properties
should get indexed. You do this by listing the property keys to index
on. In the configuration file, use the node_keys_indexable and
relationship_keys_indexable configuration keys. When using embedded
mode, use the GraphDatabaseSettings.node_keys_indexable and
GraphDatabaseSettings.relationship_keys_indexable configuration keys.
In all cases, the value should be a comma separated list of property
keys to index on.
So is Gremlin supposed to set the GraphDatabaseSettings parameters? I tried passing in a map into the Neo4jGraph constructor like this:
Map<String,String> config = [
    'node_auto_indexing': 'true',
    'node_keys_indexable': 'emailID'
]
Neo4jGraph g = new Neo4jGraph(graphDB, config);
g.loadGraphML("../databases/data.graphml");
but that had no apparent effect on index creation.
UPDATE 2
Rather than configuring the database through Gremlin, I used the examples given in the Neo4j documentation so that my database creation was like this (in Groovy):
protected Neo4jGraph getGraph(String graphDBName, String databaseName) {
    boolean populateDB = !new File(graphDBName).exists();
    if (populateDB)
        println "creating database";
    else
        println "opening database";
    GraphDatabaseService graphDB = new GraphDatabaseFactory().
        newEmbeddedDatabaseBuilder( graphDBName ).
        setConfig( GraphDatabaseSettings.node_keys_indexable, "emailID" ).
        setConfig( GraphDatabaseSettings.node_auto_indexing, "true" ).
        setConfig( GraphDatabaseSettings.dump_configuration, "true" ).
        newGraphDatabase();
    Neo4jGraph g = new Neo4jGraph(graphDB);
    if (populateDB) {
        println "Populating graph"
        g.loadGraphML(databaseName);
    }
    return g;
}
and my retrieval was done like this:
ReadableIndex<Node> autoNodeIndex = graph.rawGraph.index()
    .getNodeAutoIndexer()
    .getAutoIndex();
def node = autoNodeIndex.get( "emailID", "<2614099.1075839927264.JavaMail.evans@thyme>" ).getSingle();
And this seemed to work. Note, however, that the getIndices() call on the Neo4jGraph object still returned an empty list. So the upshot is that I can exercise the Neo4j API correctly, but the Gremlin wrapper seems to be unable to reflect the indexing state. The expression g.idx('node_auto_index') (documented in Gremlin Methods) returns null.
The auto indexes are created lazily. That is, once you have enabled auto-indexing, the actual index is first created when you index your first property. Make sure you are inserting data before checking for the existence of the index, otherwise it might not show up.
For some auto-indexing code (using programmatic configuration), see e.g. https://github.com/neo4j-contrib/rabbithole/blob/master/src/test/java/org/neo4j/community/console/IndexTest.java (this works with Neo4j 1.8).
/peter
Have you tried the automatic index feature? It's basically the use case you're looking for; unfortunately it needs to be enabled before you import the data. (Otherwise you have to remove and re-add the properties to reindex them; see the Cypher snippet after the link below.)
http://docs.neo4j.org/chunked/milestone/auto-indexing.html
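If the data is already imported, one commonly suggested trick (which I haven't verified on this dataset) is to touch the indexed property on every node so the auto indexer picks it up, for example with a Cypher statement like:
START n = node(*)
WHERE has(n.emailID)
SET n.emailID = n.emailID
The auto indexer only sees property writes, so re-setting the property to its current value should be enough to get the existing nodes into node_auto_index.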
