I have successfully used the following to get the shortest path using A* in the APOC library.
apoc.algo.aStar("A", "B", 'Link', 'Length','X','Y') YIELD path, weight
apoc.algo.aStar("A", "B", 'Link', {weight:'Length',default:1, x:'X',y:'Y'}) YIELD path, weight
How would I go about adding a filter so that it only uses edges where "Value" is true? The documentation doesn't provide an example.
public class Node {
    public long Id { get; set; }
    public string Name { get; set; }
    public long X { get; set; }
    public long Y { get; set; }
}

public class Link {
    public bool Value { get; set; }
    public long Length { get; set; }
}
There is no example because this feature is not available.
So you have three choices:
set a very high Length on relationships where "Value" is false, so the A* traversal effectively avoids them
modify your model by encoding "Value" in the relationship type (i.e. use two types: Link_On and Link_value_Off), so you can run the APOC procedure over only the "on" type (a sketch follows below)
create your own A* procedure modelled on the one from APOC (source code here)
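For the second option, the Cypher call is the important part. Below is a minimal sketch, assuming the former "Value" flag has been split into the two relationship types named above (Link_On / Link_value_Off) and that nodes are looked up by their Name property; it is shown through the Neo4j Java driver (4.x import paths), and the Bolt URL, credentials, and :Node label are placeholder assumptions, not anything from the question:

import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;
import org.neo4j.driver.Result;
import org.neo4j.driver.Session;
import org.neo4j.driver.Values;

public class FilteredAStarExample {
    public static void main(String[] args) {
        try (Driver driver = GraphDatabase.driver("bolt://localhost:7687",
                AuthTokens.basic("neo4j", "secret"));
             Session session = driver.session()) {

            // Only Link_On relationships are handed to apoc.algo.aStar, so edges
            // whose former "Value" was false are never considered by the search.
            String cypher =
                "MATCH (a:Node {Name: $from}), (b:Node {Name: $to}) "
                + "CALL apoc.algo.aStar(a, b, 'Link_On', 'Length', 'X', 'Y') "
                + "YIELD path, weight RETURN path, weight";

            Result result = session.run(cypher, Values.parameters("from", "A", "to", "B"));
            result.list().forEach(record ->
                System.out.println("weight = " + record.get("weight")));
        }
    }
}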
Related
I am following this tutorial from the Microsoft docs for linear regression using ML.NET. I am not able to determine why exactly we need a class with a score column with the specific column name 'Score'. I tried changing the column name from Score to something else, which results in the following exception:
The classes I have declared for my Input and predictions are as follows:
public class TaxiTrip {
    // All the columns, inclusive of the label column
    [LoadColumn(0)]
    public string VendorId;
    [LoadColumn(1)]
    public string RateCode;
    [LoadColumn(2)]
    public float PassengerCount;
    [LoadColumn(3)]
    public float TripTime;
    [LoadColumn(4)]
    public float TripDistance;
    [LoadColumn(5)]
    public string PaymentType;
    [LoadColumn(6)]
    public float FareAmount;
}

public class TaxiTripFarePrediction {
    [ColumnName("PredictionScore")]
    public float FareAmount;
}
Also, since I never reference the TaxiTripFarePrediction class anywhere, either in the training function or in the Evaluate function shown above, how does the model work when I specify the column name as Score rather than something else?
A similar situation applies to the Label column.
The TaxiTripFarePrediction class represents predicted results. It has a single float field, FareAmount, decorated with a ColumnName("Score") attribute. For a regression task, the Score column contains the predicted label values.
I'm new to ML, but I believe the Score and Features column names cannot be changed. You can refer to this link: https://learn.microsoft.com/en-us/dotnet/machine-learning/how-does-mldotnet-work#mlnet-architecture
As the architecture diagram there shows, the Score and Features columns are generated by the algorithms.
Suppose we have the following data in the Neo4j DB (Person 'Ron' VISITED a Monument and STAYS at a Place, while Person 'April' only STAYS at a Place):
The Java entity representation is as follows:
@NodeEntity
public class Place {
    @Id
    @GeneratedValue
    private Long id;

    private String name;
}

@NodeEntity
public class Monument {
    @Id
    @GeneratedValue
    private Long id;

    private String name;
}

@NodeEntity
public class Person {
    @Id
    @GeneratedValue
    private Long id;

    private String name;

    @Relationship(type = "VISITED", direction = Relationship.OUTGOING)
    private Monument monument;

    @Relationship(type = "STAYS", direction = Relationship.OUTGOING)
    private Place place;
}
Now I want to fetch all persons, also populating the linked place and monument if present. That is, the Cypher query should not only return a List<Person>; the Monument and Place objects should also be linked to each Person object where those relationships exist (and be null otherwise).
To clarify further: for Person 'Ron', I should be able to see the monument he visited and the place where he stays, without performing any more queries to fetch the relationships. Similarly, for Person 'April', I should be able to see where she stays, but I will not know which monument she visited because there is no such link.
With my basic knowledge of the Cypher query language, I have tried the following but could not get the desired result.
If I provide both relationships in the query and fetch the corresponding path variable, I only get the Person 'Ron' in the result:
MATCH p=(place:Place)<-[:STAYS]-(person:Person)-[:VISITED]->(monument:Monument)
RETURN p
If I only provide the relationship 'STAYS', I get 'Ron' and 'April':
MATCH p=(person:Person)-[:STAYS]->(place:Place) RETURN p
If I query without the relationships, I only get the Person objects, and the monument and place are not linked [getMonument() and getPlace() are null even for Person 'Ron']:
MATCH p=(person:Person) RETURN p
I could not find a query where I get all of these.
You need to put the relations into optional matches, like this:
MATCH (person:Person)
OPTIONAL MATCH (person)-[:VISITED]->(monument)
OPTIONAL MATCH (person)-[:STAYS]->(place)
RETURN person, place, monument
Otherwise, Neo4j treats the relationships in your first query as required, which is why 'Ron' is the only result.
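If you load these entities through an SDN repository, the same OPTIONAL MATCH idea can be used in a custom query so the optional relationships are hydrated in one round trip. A minimal sketch, assuming SDN 4.2+/Neo4j OGM; the repository name, method name, and exact RETURN clause are illustrative, not taken from the question:

import java.util.List;

import org.springframework.data.neo4j.annotation.Query;
import org.springframework.data.neo4j.repository.Neo4jRepository;

public interface PersonRepository extends Neo4jRepository<Person, Long> {

    // Returns every Person together with any VISITED / STAYS paths that exist,
    // so OGM can map monument and place when present and leave them null otherwise.
    @Query("MATCH (person:Person) "
         + "OPTIONAL MATCH visited = (person)-[:VISITED]->(:Monument) "
         + "OPTIONAL MATCH stays = (person)-[:STAYS]->(:Place) "
         + "RETURN person, collect(visited), collect(stays)")
    List<Person> findAllWithPlaceAndMonument();
}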
I have the following SDN 4 node entity:
@NodeEntity
public class Product {
    @Index(unique = false)
    private String name;
    ...
}
Inside this entity I have added a name property and declared an index.
Now I'm going to implement a case-insensitive search by product name.
I have created an SDN 4 repository method:
@Query("MATCH (p:Product) WHERE LOWER(p.name) = LOWER({name}) RETURN p")
Product findByName(@Param("name") String name);
In order to search for a product I use the following Cypher condition: LOWER(p.name) = LOWER({name})
I think the index cannot be used effectively in this case because I lowercase the strings.
What is the proper way in Neo4j/SDN 4 to make the index work here?
If you do not need to store the name in the original case, then convert the name to lowercase before storing it in the DB.
If you do need the name in the original case, then you could add an extra property (say, "lower_name") that stores the lowercased name. You can index that property and use it for indexed comparisons.
A third choice is to use legacy indexing, which is much more complex to use and no longer favored. However, it does support case-insensitive indexing (see the second example on this page).
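A minimal sketch of the second approach, assuming SDN 4 with Neo4j OGM annotations; the nameLower property, the setter that maintains it, and the repository method are illustrative names, and the entity is kept as fragmentary as in the question:

// Product.java (entity fragment)
import org.neo4j.ogm.annotation.Index;
import org.neo4j.ogm.annotation.NodeEntity;

@NodeEntity
public class Product {

    private String name;

    // Indexed, always-lowercase copy of the name, used only for case-insensitive lookups.
    @Index
    private String nameLower;

    public void setName(String name) {
        this.name = name;
        this.nameLower = (name == null) ? null : name.toLowerCase();
    }
}

// ProductRepository.java
import org.springframework.data.neo4j.annotation.Query;
import org.springframework.data.neo4j.repository.Neo4jRepository;
import org.springframework.data.repository.query.Param;

public interface ProductRepository extends Neo4jRepository<Product, Long> {

    // Only the parameter is lowercased; the stored property is already lowercase,
    // so the index on nameLower can be used for the comparison.
    @Query("MATCH (p:Product) WHERE p.nameLower = LOWER({name}) RETURN p")
    Product findByNameIgnoreCase(@Param("name") String name);
}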
I'm currently struggling with a problem regarding combinatorics.
For a prototype I wanted to try neo4j-ogm.
This is the domain model I designed so far:
@NodeEntity
public class Operand {
    @GraphId
    private Long graphId;
}

@NodeEntity
public class Option extends Operand {
    private Long id;
    private String name;

    @Relationship(type = "CONDITIONED_BY")
    private List<Rule> rules = new ArrayList<>();
}

@NodeEntity
public class Operation extends Operand {
    @Relationship(type = "COMPOSITION", direction = Relationship.INCOMING)
    private Rule composition;

    private Operation superOperation;
    private Boolean not;
    private List<Operand> operands = new ArrayList<>();
}

public class AndOperation extends Operation {
    // Relationships named accordingly "AND"
}

public class OrOperation extends Operation {
    // Relationships named accordingly "OR"
}

@NodeEntity
public class Rule {
    @GraphId
    private Long graphId;

    @Relationship(type = "COMPOSITION")
    private Operation composition;

    @Relationship(type = "CONDITIONED_BY", direction = Relationship.INCOMING)
    private Option option;
}
This is a snippet of my graph, representing something like (167079 ^ ...) & (167155 ^ ...).
Is it possible to form a Cypher query that yields all possible combinations?
167079, 167155
167079, 167092
...
So far I have found this resource, which deals with a distantly related problem.
Do you think this is a proper use case for neo4j?
Do you propose any changes in the domain model?
Do you suggest any other technology?
Edit:
The example graph shows only a small part of my original graph; I would have to calculate the combinations at various depths and with variously nested operations.
Yes, we can get this by collecting each side of the equation (two collections) and then unwinding each collection.
Assuming the two collections are passed in as parameters:
UNWIND {first} as first
UNWIND {second} as second
RETURN first, second
This should give you the Cartesian product of the elements of both collections.
EDIT
If you know how many AND branches you have (let's say 2, for example), you can form your query like this:
MATCH (first)<-[:OR*0..]-()<-[:AND]-(root)-[:AND]->()-[:OR*0..]->(second)
WHERE id(root) = 168153
RETURN first, second
(Edit: the MATCH itself will generate the Cartesian product for you here.)
As for cases where you don't know how many AND branches you have, I don't believe you can accomplish this with just Cypher using this approach, as I don't believe dynamic columns are supported.
You might be able to do this using collections, though the operations would be tricky.
A custom procedure to perform cross products given two collections would be very helpful in this case. If you could collect all collections of nodes along AND branches, you could run REDUCE() on that, applying the cross product procedure with each collection.
As far as more complex trees with more operations, I don't believe you'll be able to do this through Cypher. I'd highly recommend a custom procedure, where you'll have far more control and be able to use conditional logic, recursion, and method calls.
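A minimal sketch of such a cross-product helper, written as a Neo4j user-defined function; the package, class, and the function name example.crossProduct are illustrative, and like any procedure or function it would have to be packaged and deployed as a plugin:

package example;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.neo4j.procedure.Description;
import org.neo4j.procedure.Name;
import org.neo4j.procedure.UserFunction;

public class CrossProductFunction {

    // Builds every [a, b] pair from the two input lists; from Cypher it could be
    // called e.g. as: RETURN example.crossProduct($first, $second)
    @UserFunction("example.crossProduct")
    @Description("example.crossProduct(first, second) - all pairs from two lists")
    public List<Object> crossProduct(@Name("first") List<Object> first,
                                     @Name("second") List<Object> second) {
        List<Object> pairs = new ArrayList<>();
        if (first == null || second == null) {
            return pairs;
        }
        for (Object a : first) {
            for (Object b : second) {
                pairs.add(Arrays.asList(a, b));
            }
        }
        return pairs;
    }
}

Combined with the collected AND-branch lists from the queries above, REDUCE() could then apply this function across the collections, as suggested.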
I have a large collection of documents containing geospatial point data (among other data). I have another large collection of documents containing polygons (among other data).
I want to filter queries on the point data by whether the points are included in any of the polygons that have a certain property.
Can I do this with RavenDB and if so, how?
Things I thought about:
I can't see how I could do this with an index, because indexes only map (and/or reduce), so I cannot query one collection by another.
I can't just make the query and rely on Raven's result caching, because querying by the set of polygons would quickly make the query length exceed any sensible query length limit.
In RavenDB you can define indexes to deal with spatial data.
Let's assume you have a document with a spatial polygon defined as WKT (Well-Known Text, https://en.wikipedia.org/wiki/Well-known_text):
public class EventWithWKT
{
    public string Id { get; set; }
    public string Name { get; set; }
    public string WKT { get; set; }
}
By polygon WKT I mean something like
POLYGON ((30 10, 40 40, 20 40, 10 20, 30 10))
Then you can define an index that handles the WKT like this:
public class EventsWithWKT_ByNameAndWKT : AbstractIndexCreationTask<EventWithWKT>
{
    public EventsWithWKT_ByNameAndWKT()
    {
        Map = events => from e in events
                        select new
                        {
                            Name = e.Name,
                            WKT = e.WKT
                        };

        Spatial(x => x.WKT, options => options.Geography.Default());
    }
}
By calling the "Spatial" method in index definition, RavenDB creates special column that can be then queried with spatial queries.
Now filtering on whether a certain point is within the polygons of the documents or not, can be done with the following query:
var results = session
    .Query<EventWithWKT, EventsWithWKT_ByNameAndWKT>()
    .Customize(x => x.RelatesToShape("WKT", "POINT (30 10)", SpatialRelation.Within))
    .ToList();
That is not the only way to define and use spatial data. You can read more about RavenDB spatial indexes in these documentation articles:
defining spatial indexes
querying spatial indexes