I have been searching for a definitive answer on this but can't seem to find it anywhere. What I do know is that it's legal to create a Collection of an OData 4.0 Enumeration type. I also know that the Olingo parser we are using (4.2) allows us to query both an Enumeration and a Collection of Enumerations using the 'has' keyword. What I can't find, however, is any documentation proving that this is actually a legitimate query. I also know that, with both the Olingo and Microsoft parsers, the any/all syntax that would normally be used for a Collection does not seem to work. I would really appreciate any help figuring this out.
The OData version 4 spec says this about logical operators: "Operands of collection, entity, and complex types are not supported in logical operators." has is one of the logical operators. Therefore, has is not supported on a collection of enumeration type. Furthermore, has is defined to operate on a single enumeration value, so it is not appropriate for querying a collection of enumeration values.
The spec also says that the lambda operators operate on "a navigation path that identifies a collection". If navigation path means a path that ends with a navigation property, then any/all can only be applied to a collection identified by a navigation property. Since collections of enumeration type are represented by structural properties, it follows that any/all cannot be applied to collections of enumeration type. But this is conjecture, since the term navigation path is not defined in the spec.
In lieu of a collection of enumeration type, consider using an enumeration type that has the IsFlags attribute set. You can definitely query such an enumeration using the has operator.
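For example, assuming a hypothetical flags enumeration Sandbox.Color (declared with IsFlags="true") that has members Red and Blue, and an entity property Colors of that type, a filter along these lines is valid per the OData 4.0 spec:

$filter=Colors has Sandbox.Color'Red'

This matches entities whose Colors value has the Red flag set, which covers much of what a collection of enumeration values would have given you.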
In my code I want to take advantage of ETS's bag type, which can store multiple values for a single key. However, it would be very useful to know whether an insertion actually inserts a new value or not (i.e. whether the inserted key/value pair was or was not already present in the bag).
With the set type of ETS I could use ets:insert_new/2, but the semantics are different for bag (emphasis mine):
This function works exactly like insert/2, with the exception that instead of overwriting objects with the same key (in the case of set or ordered_set) or adding more objects with keys already existing in the table (in the case of bag and duplicate_bag), it simply returns false.
Is there a way to achieve such functionality with one call? I understand it can be achieved by a lookup followed by an optional insert, but I am afraid that might hurt performance under concurrent access.
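For reference, a minimal Erlang sketch of the lookup-then-insert approach mentioned above (the function name insert_if_absent is made up). Note that it is not atomic: two concurrent writers can both see the object as absent and both report true.

%% Sketch only: report whether {Key, Value} was newly added to a bag table.
%% Not atomic: concurrent writers may both observe the object as absent.
insert_if_absent(Tab, Key, Value) ->
    case ets:match_object(Tab, {Key, Value}) of
        [] ->
            ets:insert(Tab, {Key, Value}),
            true;
        [_|_] ->
            false
    end.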
Contrary to what's possible with the Java API, there doesn't seem to be a way to specify whether a numeric property is a byte, short, int or long:
CREATE (n:Test {value: 1}) RETURN n
always seems to create a long property. I've tried toInt(), but it is obviously understood in the mathematical sense of "integer" rather than in the sense of the computer data type.
Is there some way I'm overlooking to actually force the type?
We have defined a model and want to insert test data using Cypher statements, but the code using the data then fails with a ClassCastException since the types don't match.
If you run your Cypher queries with the embedded API, then you can provide parameters in a HashMap with the correctly typed values.
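As a minimal sketch (assuming a Neo4j 2.0 embedded setup with the old org.neo4j.cypher.javacompat.ExecutionEngine; the database path, label, and property names are just illustrative), the idea is that the Java type of the parameter value determines the stored type:

import java.util.HashMap;
import java.util.Map;

import org.neo4j.cypher.javacompat.ExecutionEngine;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

public class TypedParamExample {
    public static void main(String[] args) {
        GraphDatabaseService db =
                new GraphDatabaseFactory().newEmbeddedDatabase("target/typed-param-db");
        ExecutionEngine engine = new ExecutionEngine(db);

        Map<String, Object> params = new HashMap<String, Object>();
        params.put("value", 1);    // java.lang.Integer, as opposed to 1L (java.lang.Long)

        try (Transaction tx = db.beginTx()) {
            engine.execute("CREATE (n:Test {value: {value}}) RETURN n", params);
            tx.success();
        }
        db.shutdown();
    }
}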
For remote users it doesn't really matter, as it goes through JSON serialization back and forth, which loses the type information anyway. So it is just "numeric".
Why do you care about the numeric type?
You can also just use ((Number) n.getProperty("value")).xxxValue() (where xxx is int, long, or byte).
I have the following two Cypher calls that I'd like to combine into one:
start r=relationship:link("key:\"foo\" and value:\"bar\"") return r.guid
This returns a relationship that contains a guid that I need based on a key value pair (in this case key:foo and value:bar).
Let's assume r.guid above returns 12345.
I then need all the property relationships for the object in question, based on the returned guid and a property type key:
start r=relationship:properties("to:\"12345\" and key:\"baz\"") return r
This returns several relationships that have the values I need, in this case all properties of type baz that belong to guid 12345.
How do I combine these two calls into one? I'm sure it's simple, but I'm stumbling...
The answer I've gotten is that there is no way to perform an index lookup in the middle of a Cypher query, or to use a variable you have declared to perform the lookup.
Perhaps in a later version of Cypher; this ability should be standard, especially given the dense node issue and the suggested solution of indexing.
I'm playing around with Neo4j, and I was wondering: is it common to have a type property on nodes that specifies what type of node it is? I've tried searching for this practice, and I've seen some people use name for a purpose like this, but I was wondering whether it is considered good practice or whether indexes would be the more practical method.
An example would be a "User" node, which would have type: user; that way, if the index were bad, I would be able to do an all-node scan and look for nodes whose type is user.
Labels have been added in Neo4j 2.0. They fix this problem.
You can create nodes with labels:
CREATE (me:American {name: "Emil"}) RETURN me;
You can match on labels:
MATCH (n:American)
WHERE n.name = 'Emil'
RETURN n
You can set any number of labels on a node:
MATCH (n)
WHERE n.name='Emil'
SET n :Swedish:Bossman
RETURN n
You can remove any number of labels from a node:
MATCH (n { name: 'Emil' })
REMOVE n:Swedish
Etc...
True, it does depend on your use case.
If you add a type property and then wish to find all users, then you're in potential trouble, as you've got to examine that property on every node to get to the users. In that case, the index would probably do better, but not in cases where you need to query for all users with conditions and relations not available in the index (unless, of course, your index is the source of the "start").
If you have graphs like mine, where a relationship type implies two different node types, like A-(knows)-(B) where A or B can be a User or a Customer, then it doesn't work.
So your use case is really important: it's easy to model graphs generically, but important to "tune" the model to your usage pattern.
IMHO you shouldn't have to put a type property on the node. Instead, a common way to reference all nodes of a specific "type" is to connect all user nodes to a node called, say, "Users". That way, starting at the Users node, you can very easily find all user nodes. The "Users" node itself can be indexed so you can find it easily, or it can be connected to the reference node.
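A rough sketch of that pattern in Cypher 2.0 syntax (the node and relationship names here are made up):

CREATE (users {name: 'Users'})
CREATE (alice {name: 'Alice'})-[:MEMBER_OF]->(users)

// later, to list all user nodes, start from the hub node
MATCH (u)-[:MEMBER_OF]->(hub {name: 'Users'})
RETURN u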
I think it's really up to you. Some people like indexed type attributes, but I find that they're mostly useful when you have other indexed attributes to narrow down the number of index hits (search for all users over age 21, for example).
That said, as @Luanne points out, most of us try to solve the problem in-graph first. Another way to do that (and the more natural way, in my opinion) is to use the relationship type to infer a practical node type, i.e. "A - (knows) -> B", so A must be a user or some other thing that can "know", and B must be another user, a topic, or some other object that can "be known".
For client APIs, modeling the element type as a property makes it easy to instantiate the right domain object in your client-side code, so I always include a type property on each node/vertex.
The "type" var name is commonly used for this, but in some languages like Python, "type" is a reserved word so I use "element_type" in Bulbs ( http://bulbflow.com/quickstart/#models ).
This is not needed for edges/relationships because they already contain a type (the label); note that Neo4j also uses the keyword "type" instead of "label" for relationships.
I'd say it's common practice. As an example, this is exactly how Spring Data Neo4j knows which entity type a certain node is. Each node has a "type" property that contains the qualified class name of the entity. These properties are automatically indexed in the "types" index, so nodes can be looked up really fast. You could implement your use case exactly like this.
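For illustration, a bare-bones Spring Data Neo4j entity might look like the following (the class and field names are made up); SDN writes and indexes its type metadata for entities like this automatically:

import org.springframework.data.neo4j.annotation.GraphId;
import org.springframework.data.neo4j.annotation.NodeEntity;

// Minimal SDN node entity; the node's type metadata is maintained by SDN itself.
@NodeEntity
public class User {
    @GraphId
    private Long id;
    private String name;
}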
Labels have recently been added to Neo4j 2.0 ( http://docs.neo4j.org/chunked/milestone/graphdb-neo4j-labels.html ). They are still under development at the moment, but they address this exact problem.