getOrCreate with java-rest-binding for Neo4j fails

I have a simple relationship test that tries to create a unique node using the REST API (java-rest-binding, https://github.com/neo4j/java-rest-binding), but unfortunately I am stuck on something. The non-unique node and relationship creation works perfectly fine; it is only the unique creation that fails, so most likely I am doing something naive (pardon my lack of knowledge of Neo4j). Here are the details:
final UserModel userModel = new UserModel();
final HashMap<String, Object> uModelAttributes = new HashMap<String, Object>(0);
uModelAttributes.put("name", "AnirudhVyas");
userModel.setAttributes(uModelAttributes);
final HashSet<Action> buyHistory = new HashSet<Action>();
final Action buyAction = new Action();
final ProductModel productModel = new ProductModel();
final HashMap<String, Object> attributes = new HashMap<String, Object>(0);
attributes.put("name", "mercedes benz ");
attributes.put("make", "mercedes benz");
attributes.put("model", "sls 550");
attributes.put("year", "2014");
productModel.setAttributes(attributes);
buyAction.setProduct(productModel);
buyHistory.add(buyAction);
userModel.setBuyHistory(buyHistory);
System.out.println("Before");
new UserModelDAO().createCompleteTree(userModel);
System.out.println("Completed >>>
If I use this in the DAO:
final RestNode root = api.getOrCreateNode(
        api.index().forNodes("users", MapUtil.stringMap(IndexManager.PROVIDER, "lucene", "type", "fulltext")),
        "name", m.getAttributes().get("name"), m.getAttributes());
api.getOrCreateNode(
        api.index().forNodes("products", MapUtil.stringMap(IndexManager.PROVIDER, "lucene", "type", "fulltext")),
        "name", buyAction.getProduct().getAttributes().get("name"), buyAction.getProduct().getAttributes()), RelationshipTypes.BOUGHT);
This fails with:
java.lang.RuntimeException: Error retrieving or creating node for key name and value AnirudhVyas with index users
at org.neo4j.rest.graphdb.ExecutingRestAPI.getOrCreateNode(ExecutingRestAPI.java:448)
at org.neo4j.rest.graphdb.RestAPIFacade.getOrCreateNode(RestAPIFacade.java:223)
at xxxx.xxxx.xxxx.graph.UserModelCreateTasteKeyNeo4JBatchCallback.recordBatch(UserModelCreateTasteKeyNeo4JBatchCallback.java:61)

There are several ways to do it, one of them is using Cypher:
MATCH a
WHERE a.name! = 'nameTofound'
CREATE UNIQUE a-[:Relationship]-c:LABEL
RETURN c
Use it inside a QueryEngine. It works like a charm; see the following link for more details:
http://docs.neo4j.org/chunked/milestone/query-create-unique.html
Java code is here:
protected static RestNode createOrGetExistingNode(String key, String valueFor, String rel, String label) {
    // MATCH the node by the given property, then CREATE UNIQUE the relationship to a labelled node
    RestNode node = null;
    final QueryResult<Map<String, Object>> result = GraphRestService.queryEngine.query(
            String.format("MATCH node WHERE node.%s! = '%s' CREATE UNIQUE node-[:%s]-c:%s RETURN c",
                    key, valueFor, rel, label),
            MapUtil.map("reference", 0));
    for (Map<String, Object> column : result) {
        node = (RestNode) column.get("c");
    }
    return node;
}

I solved the problem by not using the batch REST callback. When I simply call getOrCreateXXX() on the RestAPI directly, it works like a charm. I still need to investigate why getOrCreate behaves differently inside BatchCallback#recordBatch().
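For illustration only (this is an added sketch, not the code from the question), the direct, non-batch call looks roughly like this, assuming a local server URL and the same "users" index configuration as above:
import org.neo4j.graphdb.index.IndexManager;
import org.neo4j.helpers.collection.MapUtil;
import org.neo4j.rest.graphdb.RestAPIFacade;
import org.neo4j.rest.graphdb.entity.RestNode;

// server URL is a placeholder
RestAPIFacade api = new RestAPIFacade("http://localhost:7474/db/data");
// get-or-create against the index: returns the existing node for this key/value,
// or creates a new node with the supplied properties
RestNode user = api.getOrCreateNode(
        api.index().forNodes("users", MapUtil.stringMap(IndexManager.PROVIDER, "lucene", "type", "fulltext")),
        "name", "AnirudhVyas",
        MapUtil.map("name", "AnirudhVyas"));
System.out.println("user node id: " + user.getId());
Called this way, outside BatchCallback#recordBatch(), the call succeeds, which matches the behaviour described above.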

Related

How to do group by key on custom logic in cloud data flow

I am trying to do a GroupByKey based on a custom object in a Cloud Dataflow pipeline.
public static void main(String[] args) {
    Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.create());
    List<KV<Student, StudentValues>> studentList = new ArrayList<>();
    studentList.add(KV.of(new Student("pawan", 10, "govt"),
            new StudentValues("V1", 123, "govt")));
    studentList.add(KV.of(new Student("pawan", 13223, "word"),
            new StudentValues("V2", 456, "govt")));
    PCollection<KV<Student, StudentValues>> pc =
            pipeline.apply(Create.of(studentList));
    PCollection<KV<Student, Iterable<StudentValues>>> groupedWords =
            pc.apply(GroupByKey.<Student, StudentValues>create());
}
I just want to group the records of the PCollection by the Student key object.
@DefaultCoder(AvroCoder.class)
static class Student /*implements Serializable*/ {
    public Student() {}
    public Student(String n, Integer i, String sc) {
        name = n;
        id = i;
        school = sc;
    }
    public String name;
    public Integer id;
    public String school;
    @Override
    public boolean equals(Object obj) {
        System.out.println("obj = " + obj);
        System.out.println("this = " + this);
        Student stObj = (Student) obj;
        if (stObj.name == this.name) {
            return true;
        } else {
            return false;
        }
    }
}
I have overridden the equals method of my custom class, but each time I get the same instance of the Student object to compare against inside the equals method. Ideally it should compare the first student key with the second one. What am I doing wrong here?
Why do you think you are doing anything wrong? The keys of each element are serialized (using the AvroCoder you specified) and the GroupByKey can group all of the elements with the same serialized representation together. After that it doesn't need to compare the students to make sure that the values with the same key have been grouped together.
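To make that concrete, here is an added sketch (not part of the original answer) that encodes two keys with the coder and compares the bytes, which is effectively what GroupByKey keys on. It assumes the Apache Beam packages; the older Dataflow SDK has the same classes under com.google.cloud.dataflow.sdk, and error handling for CoderException is omitted:
import java.util.Arrays;
import org.apache.beam.sdk.coders.AvroCoder;
import org.apache.beam.sdk.util.CoderUtils;

AvroCoder<Student> coder = AvroCoder.of(Student.class);
byte[] first = CoderUtils.encodeToByteArray(coder, new Student("pawan", 10, "govt"));
byte[] second = CoderUtils.encodeToByteArray(coder, new Student("pawan", 10, "govt"));
// GroupByKey treats two keys as equal when their encoded forms match,
// so Student.equals()/hashCode() never need to be called
System.out.println("grouped together? " + Arrays.equals(first, second));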

How to get the direct relationship entities and directly related nodes in custom query in SDN4?

I have an annotated finder method in my repository:
#Query("MATCH (me:User)<-[ab:ASKED_BY]-(q:Question) WHERE id(me) = {0} RETURN q")
Iterable<Question> findQuestionsByUserId(Long id);
My objects like:
@NodeEntity
public class Question {
    private AskedBy askedBy;
    @Relationship(type = "TAGGED_WITH")
    private Set<Tag> tags = new HashSet<>();
    //...
}
@RelationshipEntity(type = "ASKED_BY")
public class AskedBy {
    @GraphId private Long id;
    @StartNode
    private User user;
    @EndNode
    private Question question;
    // other props
}
When I call the repository method, the askedBy field is null in the result. How can I populate that field with the relationship?
Update:
I have tried to load the relationship with session loadAll(collection) but it did not help.
final Collection<Question> questions = (Collection<Question>) questionRepository.findQuestionsByUserId(user.getId());
final Question q = questions.iterator().next();
System.out.println("After `findQuestionsByUserId`:");
System.out.println("`q.getTags().size()`: " + q.getTags().size());
System.out.println("`q.getAskedBy()`: " + q.getAskedBy());
neo4jOperations.loadAll(questions, 1);
System.out.println("After `neo4jOperations.loadAll(questions, 1)`:");
System.out.println("`q.getTags().size()`: " + q.getTags().size());
System.out.println("`q.getAskedBy()`: " + q.getAskedBy());
final Collection<AskedBy> askedByCollection = neo4jOperations.loadAll(AskedBy.class);
System.out.println("`askedByCollection.size()`: " + askedByCollection.size());
The above snippet outputs
After findQuestionsByUserId:
q.getTags().size(): 0
q.getAskedBy(): null
After neo4jOperations.loadAll(questions, 1):
q.getTags().size(): 1
q.getAskedBy(): null
askedByCollection.size(): 0
So it seems the default depth is 0 for the custom query, and for some unknown reason I cannot load the relationship entity. The graph itself looks okay.
At the moment, custom queries do not support a depth parameter (it's on the roadmap), so you have the following options:
a) Use repository.findOne(userId) (the default depth is 1, so it should load AskedBy), or customize the depth with repository.findOne(userId, depth), or use Neo4jTemplate.load(type, id, depth).
b) If you need to query on more than the id, use the loadAll methods on org.neo4j.ogm.session.Session that accept a set of org.neo4j.ogm.cypher.Filter. Examples are available in MusicIntegrationTest.
c) Continue with the custom query, but after you get the entity ID back, load it via the load* methods with a custom depth (sketched below).
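A rough sketch of option (c), added for illustration: it reuses the repository and neo4jOperations from the question and assumes Question exposes its graph id via getId():
// fetch the shallow results from the custom query, then re-load each one at depth 1
// so that the ASKED_BY relationship entity and the TAGGED_WITH nodes are hydrated
Collection<Question> questions = (Collection<Question>) questionRepository.findQuestionsByUserId(user.getId());
for (Question shallow : questions) {
    Question full = neo4jOperations.load(Question.class, shallow.getId(), 1);
    System.out.println(full.getAskedBy());
}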

Adding a node to a Neo4j database using the Cypher ExecutionEngine

Created the database as follows:
graphDb = new GraphDatabaseFactory().newEmbeddedDatabase( "D:/TestGraphDatabase" );
cypherEngine = new ExecutionEngine(graphDb, null);
Attempted to add a node in the following manner:
String parentString = "Thing";
String uri = "XXX";
String queryString = "MERGE (owl:{name} {uri: {uri}, name: {name}}) RETURN n";
Map<String, Object> parameters = new HashMap<>();
parameters.put( "name", parentString );
parameters.put( "uri", uri );
resultIterator = (ResourceIterator<Node>) cypherEngine.execute(queryString, parameters).columnAs("n");
result = resultIterator.next();
tx.success();
return result;
This gives me a null pointer exception:
at org.neo4j.cypher.ExecutionEngine.planQuery(ExecutionEngine.scala:85)
at org.neo4j.cypher.ExecutionEngine.execute(ExecutionEngine.scala:75)
at org.neo4j.cypher.ExecutionEngine.execute(ExecutionEngine.scala:71)
What am I doing wrong?
Needed to pass StringLogger.DEV_NULL during the ExecutionEngine initialization.
Use the ExecutionEngine from the javacompat package, not the other one (which is from Scala).
The results will also be easier to handle that way.
see: http://neo4j.com/docs/stable/tutorials-cypher-java.html
where it says:
Caution:
The classes used here are from the org.neo4j.cypher.javacompat package, not org.neo4j.cypher, see link to the Java API below.
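Putting both answers together, a minimal added sketch (not from the original answers), using Neo4j 2.x package names. Note that a label cannot be passed as a query parameter, so the label is written literally here:
import java.util.Map;
import org.neo4j.cypher.javacompat.ExecutionEngine;
import org.neo4j.cypher.javacompat.ExecutionResult;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;
import org.neo4j.helpers.collection.MapUtil;
import org.neo4j.kernel.impl.util.StringLogger;

GraphDatabaseService graphDb = new GraphDatabaseFactory().newEmbeddedDatabase("D:/TestGraphDatabase");
// passing StringLogger.DEV_NULL instead of null avoids the NullPointerException in planQuery
ExecutionEngine cypherEngine = new ExecutionEngine(graphDb, StringLogger.DEV_NULL);
Map<String, Object> parameters = MapUtil.map("name", "Thing", "uri", "XXX");
try (Transaction tx = graphDb.beginTx()) {
    // only property values may be parameterized; the label is part of the query text
    ExecutionResult result = cypherEngine.execute(
            "MERGE (n:Thing {uri: {uri}, name: {name}}) RETURN n", parameters);
    for (Map<String, Object> row : result) {
        Node n = (Node) row.get("n");
        System.out.println(n.getProperty("name"));
    }
    tx.success();
}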

Return JSONObject from a server plugin in Neo4j

I am attempting to create a server plugin in Neo4j to run a specific query, and I wish to return not one Iterable of Node but two.
I saw that this is not possible according to the Neo4j docs, so I tried to create an array of JSONObject from these lists and return it as the plugin result, but that does not seem to work either.
So I am asking whether someone has already done such a thing.
I have been told on the Neo4j Google group to use Gremlin, but I have never used it before and it looks a bit complicated.
Any help would be very much appreciated.
Thanks
I eventually got around the problem by merging the two lists I wanted to return into a single list. I can then split them apart in my Python code, since I know where each one starts.
public class Ond extends ServerPlugin {
    @PluginTarget(GraphDatabaseService.class)
    public static Iterable<Node> getOnd(
            @Source GraphDatabaseService graphDb,
            @Description("the airline's node ID") @Parameter(name = "id") int id) {
        List<Node> results = new ArrayList<Node>();
        String n4jQuery = "START al = node(" + id + ") MATCH ond-[:operatedBy]->al, ond-[:origin]->orig, ond-[:destination]->dest RETURN orig, dest;";
        ExecutionEngine engine = new ExecutionEngine(graphDb);
        ExecutionResult result = engine.execute(n4jQuery);
        List<Node> orig = new ArrayList<Node>();
        List<Node> dest = new ArrayList<Node>();
        // creating the lists I want to return
        // an outer loop over the result rows
        for (Map<String, Object> row : result) {
            // an inner loop over the two columns: orig and dest
            for (Map.Entry<String, Object> column : row.entrySet()) {
                String key = column.getKey();
                Node n = (Node) column.getValue();
                if (key.equals("dest")) {
                    dest.add(n);
                } else {
                    orig.add(n);
                }
            }
        }
        // merging the two lists
        results.addAll(orig);
        results.addAll(dest);
        // orig elements are between indices 0 and size(results)/2 - 1
        // and dest elements are between size(results)/2 and size(results) - 1
        return results;
    }
}
Hope it helps!

Newtonsoft Json ContractResolver with MVC 4.0 Web API not producing the expected output

I am trying to create a conditional ContractResolver so that I can control serialization differently depending on the web request/controller action.
For example, in my User controller I want to serialize all properties of my User, but for some of the related objects I might only serialize the primitive types. If I hit my Company controller, I want to serialize all the properties of the company but maybe only the primitive properties of the user (which is why I don't want to use data annotations or ShouldSerialize functions).
So, following the custom ContractResolver documentation page, I created my own:
http://james.newtonking.com/projects/json/help/index.html?topic=html/ContractResolver.htm
It looks like this:
public class IgnoreListContractResolver : DefaultContractResolver
{
    private readonly Dictionary<string, List<string>> IgnoreList;

    public IgnoreListContractResolver(Dictionary<string, List<string>> i)
    {
        IgnoreList = i;
    }

    protected override IList<JsonProperty> CreateProperties(Type type, MemberSerialization memberSerialization)
    {
        List<JsonProperty> properties = base.CreateProperties(type, memberSerialization).ToList();
        if (IgnoreList.ContainsKey(type.Name))
        {
            properties.RemoveAll(x => IgnoreList[type.Name].Contains(x.PropertyName));
        }
        return properties;
    }
}
And then in my Web API controller action for GetUsers I do this:
public dynamic GetUsers()
{
    List<User> Users = db.Users.ToList();

    List<string> RoleList = new List<string>();
    RoleList.Add("UsersInRole");

    List<string> CompanyList = new List<string>();
    CompanyList.Add("CompanyAccesses");
    CompanyList.Add("ArchivedMemberships");
    CompanyList.Add("AddCodes");

    Dictionary<string, List<string>> IgnoreList = new Dictionary<string, List<string>>();
    IgnoreList.Add("Role", RoleList);
    IgnoreList.Add("Company", CompanyList);

    GlobalConfiguration
        .Configuration
        .Formatters.JsonFormatter
        .SerializerSettings
        .ContractResolver = new IgnoreListContractResolver(IgnoreList);

    return new { List = Users, Status = "Success" };
}
So when debugging this I see my contract resolver run, and it returns the correct properties, but the JSON returned to the browser still contains entries for the properties I removed from the list.
Any ideas what I am missing, or how I can step into the JSON serialization step in Web API controllers?
UPDATE
I should add that this is in an MVC4 project that has both MVC controllers and Web API controllers. The User, Company, and Role objects are code-first entities loaded from EF5. The controller in question is a Web API controller. Not sure why this matters, but I tried this in a clean Web API project (without EF5) instead of the MVC project and it worked as expected. Does that help identify where the problem might be?
Thanks
UPDATE 2
In the same MVC4 project I created an extension method on object called ToJson. It uses Newtonsoft.Json.JsonSerializer to serialize my entities. It's this simple:
public static string ToJson(this object o, Dictionary<string, List<string>> IgnoreList)
{
    JsonSerializer js = JsonSerializer.Create(new Newtonsoft.Json.JsonSerializerSettings()
    {
        Formatting = Formatting.Indented,
        DateTimeZoneHandling = DateTimeZoneHandling.Utc,
        ContractResolver = new IgnoreListContractResolver(IgnoreList),
        ReferenceLoopHandling = ReferenceLoopHandling.Ignore
    });
    js.Converters.Add(new Newtonsoft.Json.Converters.StringEnumConverter());
    var jw = new StringWriter();
    js.Serialize(jw, o);
    return jw.ToString();
}
And then in an MVC action I create a JSON string like this:
model.jsonUserList = db.Users.ToList().ToJson(IgnoreList);
The ignore list is created exactly as in my previous post. Again, I see the contract resolver run and correctly limit the properties list, but the output JSON string still contains everything (including the properties I removed from the list). Does this help? I must be doing something wrong, and now it seems like it isn't the MVC or Web API framework. Could this have anything to do with EF interactions/proxies/etc.? Any ideas would be much appreciated.
Thanks
UPDATE 3
Process of elimination and a little more thorough debugging made me realize that EF5 dynamic proxies were breaking my serialization and the ContractResolver's type-name check. So here is my updated IgnoreListContractResolver. At this point I am just looking for opinions on better approaches, or on whether I am doing something terrible. I know this is jumping through a lot of hoops just to use my EF objects directly instead of DTOs, but in the end I am finding this solution really flexible.
public class IgnoreListContractResolver : CamelCasePropertyNamesContractResolver
{
    private readonly Dictionary<string, List<string>> IgnoreList;

    public IgnoreListContractResolver(Dictionary<string, List<string>> i)
    {
        IgnoreList = i;
    }

    protected override IList<JsonProperty> CreateProperties(Type type, MemberSerialization memberSerialization)
    {
        List<JsonProperty> properties = base.CreateProperties(type, memberSerialization).ToList();
        string typename = type.Name;
        if (type.FullName.Contains("System.Data.Entity.DynamicProxies."))
        {
            typename = type.FullName.Replace("System.Data.Entity.DynamicProxies.", "");
            typename = typename.Remove(typename.IndexOf('_'));
        }
        if (IgnoreList.ContainsKey(typename))
        {
            // remove anything in the ignore list, ignoring case because we are using camel case for JSON
            properties.RemoveAll(x => IgnoreList[typename].Contains(x.PropertyName, StringComparer.CurrentCultureIgnoreCase));
        }
        return properties;
    }
}
I think it might help if you used Type instead of string for the ignore list's key type. That way you avoid naming issues (multiple types with the same name in different namespaces) and you can make use of inheritance. I'm not familiar with EF5 and its proxies, but I guess the proxy classes derive from your entity classes, so you can check Type.IsAssignableFrom() instead of just checking whether the type name is a key in the ignore list.
private readonly Dictionary<Type, List<string>> IgnoreList;

protected override IList<JsonProperty> CreateProperties(Type type, MemberSerialization memberSerialization)
{
    List<JsonProperty> properties = base.CreateProperties(type, memberSerialization).ToList();
    // look for the first dictionary entry whose key is a superclass of "type"
    Type key = IgnoreList.Keys.FirstOrDefault(k => k.IsAssignableFrom(type));
    if (key != null)
    {
        // remove anything in the ignore list, ignoring case because we are using camel case for JSON
        properties.RemoveAll(x => IgnoreList[key].Contains(x.PropertyName, StringComparer.CurrentCultureIgnoreCase));
    }
    return properties;
}
Then the ignore list must be created like this (I also used the short syntax for creating the list and dictionary):
var CompanyList = new List<string> {
    "CompanyAccesses",
    "ArchivedMemberships",
    "AddCodes"
};
var IgnoreList = new Dictionary<Type, List<string>> {
    // I just replaced "Company" with typeof(Company) here:
    { typeof(Company), CompanyList }
};
Be aware that, if you use my code above, adding typeof(object) as the first key to the ignore list will cause this entry to be matched every time, and none of your other entries will ever be used! This happens because a variable of type object is assignable from every other type.
