I am trying to update a Person entity in Neo4j Community Edition 3.0.3 using SDN (spring-data-neo4j 4.1.2.RELEASE) and am seeing some unexpected behavior while updating an entity.
I created a Person entity named "person" and saved it to the database (line 8). I then changed a property (fullName) of the saved entity but did not persist that change (line 10). Next, I retrieved the same person from the database with a findBy method into another variable named "person2" (line 12). The change made to "person" in line 10 is now lost: both person and person2 have the same property values.
1.  Person person = new Person();
2.  person.setUuid(UUID.randomUUID().toString());
3.  person.setFullName("P1");
4.  person.setEmail("PersonP1@gmail.com");
5.  person.setUsername("PersonP1@gmail.com");
6.  person.setPhone("123456789");
7.  person.setDob(new Date());
8.  personService.create(person);
9.  System.out.println(person);
    // Person{id=27, username='PersonP1@gmail.com', fullName='P1', email='PersonP1@gmail.com'}
10. person.setFullName("P2");
11. System.out.println(person);
    // Person{id=27, username='PersonP1@gmail.com', fullName='P2', email='PersonP1@gmail.com'}
12. Person person2 = personService.findByEmail("PersonP1@gmail.com");
13. System.out.println(person2);
    // Person{id=27, username='PersonP1@gmail.com', fullName='P1', email='PersonP1@gmail.com'}
14. System.out.println(person);
    // Person{id=27, username='PersonP1@gmail.com', fullName='P1', email='PersonP1@gmail.com'}
Is this the default behavior of Neo4j SDN?
Given below are the pom entries as well as the configuration used for Neo4j, as requested in the comments:
<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-neo4j</artifactId>
    <!-- <version>4.1.2.RELEASE</version> -->
</dependency>
<dependency>
    <groupId>org.neo4j</groupId>
    <artifactId>neo4j-ogm-core</artifactId>
    <version>2.0.4</version>
</dependency>
public class MyNeo4jConfiguration extends Neo4jConfiguration {

    @Bean
    public org.neo4j.ogm.config.Configuration getConfiguration() {
        org.neo4j.ogm.config.Configuration config = new org.neo4j.ogm.config.Configuration();
        config.driverConfiguration()
              .setDriverClassName("org.neo4j.ogm.drivers.http.driver.HttpDriver")
              .setCredentials("neo4j", "admin")
              .setURI("http://localhost:7474");
        return config;
    }

    @Bean
    public SessionFactory getSessionFactory() {
        return new SessionFactory(getConfiguration(), "au.threeevolutions.bezzur.domain");
    }
}
This behaviour has been fixed in the latest version of Neo4j OGM, 2.0.4.
If you reload an entity that the session is already tracking, the entity properties will not be overwritten, i.e. the properties in the session cache are returned, preserving your unpersisted modifications. Note, however, that relationships and new nodes may be added to the subgraph in the session if they are pulled in, for example by loading related nodes.
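If you instead want a reload that reflects what is actually stored in the database rather than the session cache, one option is to clear the session before loading. A minimal sketch, assuming the OGM Session is injectable where you need it (findFresh is an illustrative helper name, not part of SDN):

import org.neo4j.ogm.session.Session;
import org.springframework.beans.factory.annotation.Autowired;

@Autowired
private Session session;

public Person findFresh(String email) {
    // discard the session's cached entities (including any unpersisted changes),
    // so the next load maps the state currently stored in the database
    session.clear();
    return personService.findByEmail(email);
}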
I tried to update to EhCache 3, but noticed that my AclConfig for spring-security-acl no longer works. The reason is that EhCacheBasedAclCache still imports net.sf.ehcache.Ehcache, while EhCache moved to the org.ehcache namespace in version 3, so this no longer works. Is there a replacement class provided by Spring for EhCache 3, or would I need to implement my own ACL cache?
This is the code that no longer works:
@Bean
public EhCacheBasedAclCache aclCache() {
    return new EhCacheBasedAclCache(aclEhCacheFactoryBean().getObject(),
            permissionGrantingStrategy(), aclAuthorizationStrategy());
}
I added a bounty to your question because I'm also looking for a more authoritative answer.
Here's a solution that works, but there may be a better approach, and the cache settings could be tuned specifically for ACL.
1) The JdbcMutableAclService accepts any AclCache implementation, not just EhCacheBasedAclCache. An immediately available implementation is SpringCacheBasedAclCache. You could also implement your own.
2) Enable EhCache 3 in your project with Spring Cache as the abstraction. In Spring Boot this is as simple as using the @EnableCaching annotation. Then add an @Autowired CacheManager cacheManager field in your bean configuration class, as sketched below.
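A minimal sketch of that configuration class (the class name AclCacheConfig is illustrative; with Spring Boot, pointing the spring.cache.jcache.config property at your ehcache3.xml is one way to back this CacheManager with EhCache 3 via its JCache provider):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableCaching // enables the Spring Cache abstraction
public class AclCacheConfig {

    // with spring.cache.jcache.config=classpath:ehcache3.xml, this CacheManager
    // is backed by EhCache 3 through its JCache (JSR-107) provider
    @Autowired
    private CacheManager cacheManager;

    // ...ACL beans that use cacheManager go here (see steps 4 and 5)...
}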
3) Update your ehcache3.xml with an entry for aclCache.
Note: the key type is java.io.Serializable because Spring ACL inserts cache entries keyed on both Long and ObjectIdentity :)
<cache alias="aclCache">
    <key-type>java.io.Serializable</key-type>
    <value-type>org.springframework.security.acls.model.MutableAcl</value-type>
    <expiry>
        <ttl unit="seconds">3600</ttl>
    </expiry>
    <resources>
        <heap unit="entries">2000</heap>
        <offheap unit="MB">10</offheap>
    </resources>
</cache>
4) Replace your EhCacheBasedAclCache bean with SpringCacheBasedAclCache like so:
@Bean
public AclCache aclCache() {
    return new SpringCacheBasedAclCache(
            cacheManager.getCache("aclCache"),
            permissionGrantingStrategy(),
            aclAuthorizationStrategy());
}
5) Pass aclCache() to the JdbcMutableAclService constructor, as sketched below.
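A minimal sketch of step 5; the dataSource parameter is an assumption standing in for however your application obtains its DataSource, and BasicLookupStrategy is Spring Security's standard LookupStrategy implementation:

import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.security.acls.jdbc.BasicLookupStrategy;
import org.springframework.security.acls.jdbc.JdbcMutableAclService;

@Bean
public JdbcMutableAclService aclService(DataSource dataSource) {
    // reuse the SpringCacheBasedAclCache bean from step 4 together with the
    // strategy beans already defined in this configuration
    BasicLookupStrategy lookupStrategy = new BasicLookupStrategy(
            dataSource, aclCache(), aclAuthorizationStrategy(), permissionGrantingStrategy());
    return new JdbcMutableAclService(dataSource, lookupStrategy, aclCache());
}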
We are using the Jersey Test Framework for API testing. In test mode we use an H2 database; in production, MySQL. Everything is fine up to this point.
Now I want to write tests for our repositories to check whether the data is written properly to the database.
I can't inject any classes in my tests, so I am using the standard constructor to create a new instance of RepositoryA. That works for me.
Now the problem: RepositoryA injects an instance of RepositoryB, and injection isn't working in test scope.
Is it possible to get injection running in this environment?
Depending on the versions of the libraries you are using, running CDI in a JUnit test differs.
First you need to add this dependency, selecting the right version:
<dependency>
    <groupId>org.jboss.weld</groupId>
    <artifactId>weld-junit5</artifactId> <!-- or weld-junit4 -->
    <version>1.3.0.Final</version>
    <scope>test</scope>
</dependency>
Then you can enable Weld in your JUnit test. Here is an example that injects a repository for an entity class called VideoGame:
@Slf4j
@EnableWeld
class VideoGameRepositoryTest {

    @WeldSetup
    private WeldInitiator weld = WeldInitiator.performDefaultDiscovery();

    @Inject
    private VideoGameRepository repo;

    @Test
    void test() {
        VideoGame videoGame = VideoGameFactory.newInstance();
        videoGame.setName("XENON");
        repo.save(videoGame);
        // testing that the version field has been generated by the JPA provider
        Assert.assertNotNull(videoGame.getVersion());
        Assert.assertTrue(videoGame.getVersion() > 0);
        log.info("Video Game : {}", videoGame);
    }
}
The important parts are:
- the @EnableWeld annotation placed on the JUnit test class.
- the @WeldSetup annotation placed on a WeldInitiator field, to look up all annotated classes.
- don't forget a beans.xml in the META-INF of your test classpath in order to set up the discovery mode (see the sketch below).
- @Slf4j is a Lombok annotation; you don't need it (unless you are already using Lombok).
Here the VideoGameRepository instance benefits from injection as well, just like in a classical CDI project.
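For reference, a minimal beans.xml for the test classpath might look like this; the discovery mode shown is an assumption and should match your CDI setup:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://xmlns.jcp.org/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
                           http://xmlns.jcp.org/xml/ns/javaee/beans_1_1.xsd"
       bean-discovery-mode="all">
</beans>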
Here is the code of the VideoGameFactory, which returns a brand-new instance of the entity class marked with the @Dependent scope. This factory programmatically invokes the current CDI context.
public class VideoGameFactory {

    public static VideoGame newInstance() {
        // ask CDI for the instance, injecting required dependencies
        return CDI.current().select(VideoGame.class).get();
    }
}
Alternatively, you can have a look at Arquillian, which comes with a full Java EE server so that all the needed dependencies are available.
I'm running into a problem with my Elasticsearch Document index creation failing on startup with "java.lang.IllegalArgumentException: can't add a _parent field that points to an already existing type, that isn't already a parent". I'm not sure if this is due to a version upgrade or because I am starting with a brand-new Elasticsearch server install.
Contrived example that shows what I'm seeing:
// UserSearchResult.java
@Document(indexName = "hr_index", type = "user")
public class UserSearchResult implements Serializable {
    ...
    @Field(type = FieldType.keyword)
    @Parent(type = "department")
    private String departmentCode;
    ...
}

// DepartmentSearchResult.java
@Document(indexName = "hr_index", type = "department")
public class DepartmentSearchResult implements Serializable {
    ...
}
When I start my application I get that exception. If I check the Elasticsearch server, I see the "hr_index" index and the "department" mapping, but the "user" mapping is not created.
If I understand the error, it's because "department" is created first, and when Spring then tries to create "user" with "department" as its parent, it fails, since "department" wasn't marked as a parent when it was created.
Is there some way (via an annotation?) to denote DepartmentSearchResult as a parent when it's created?
Or, is it possible to give Spring Data Elasticsearch a hint as to what order it should create the indices/mappings in? I have seen some other posts (Spring Data Elasticsearch Parent/Child Document Repositories / Test execution error), but disabling auto-creation and then creating everything manually (either as part of my Spring codebase or external to the app) seems kind of "un-Spring-y" to me.
Or, is there some other approach I should be taking?
(This is a working Spring application that had been using Spring 4.2.1 and Spring Data Release Train Gosling, which I'm attempting to upgrade to Spring 5.0.0 and Spring Data Release Train Kay. As part of this I am starting with a fresh Elasticsearch install, so I'm not sure if this error comes from the upgrade or just because the install is clean.)
In Spring Data Elasticsearch, support for the parent-child relationship is currently rather poorly developed.
The problem is most likely due to the fact that you are using a clean installation of Elasticsearch. Before the upgrade the problem did not arise because the mappings had already been created. As a solution you can use the elasticsearchTemplate, which is part of Spring Data Elasticsearch, together with an ApplicationListener. It takes just three steps.
Drop the index in ES (this is needed only once):
curl -XDELETE [ES_IP]:9200/hr_index
Tell Spring Data Elasticsearch not to create indices and mappings automatically:
// UserSearchResult.java
@Document(indexName = "hr_index", type = "user", createIndex = false)
public class UserSearchResult implements Serializable {
    ...
    @Field(type = FieldType.keyword)
    @Parent(type = "department")
    private String departmentCode;
    ...
}

// DepartmentSearchResult.java
@Document(indexName = "hr_index", type = "department", createIndex = false)
public class DepartmentSearchResult implements Serializable {
    ...
}
Add an ApplicationListener:
@Component
public class ApplicationStartupListener implements ApplicationListener<ContextRefreshedEvent> {

    @Autowired
    private ElasticsearchTemplate elasticsearchTemplate;

    // the mapping for the child must be created only if the mapping for the parent doesn't exist yet
    @Override
    public void onApplicationEvent(ContextRefreshedEvent event) {
        elasticsearchTemplate.createIndex(DepartmentSearchResult.class);
        try {
            elasticsearchTemplate.getMapping(DepartmentSearchResult.class);
        } catch (ElasticsearchException e) {
            elasticsearchTemplate.putMapping(UserSearchResult.class);
            elasticsearchTemplate.putMapping(DepartmentSearchResult.class);
        }
    }
}
P.S. Among other things, it is worth noting that with the release of ES 5.6 the removal of mapping types began. This inevitably entails the removal of the parent-child relationship in its current form. In one of the next releases of Spring Data Elasticsearch we will provide the ability to work with joins; working with parent-child relationships is unlikely to be improved.
This functionality was working at one point but seems to have broken in the latest SDN 4 snapshot (7-16-15).
I have two node classes: one representing intermediate, non-leaf nodes and one representing leaf vertex nodes of degree one. The two classes implement a common interface.
public interface Node {
    ...
}

@NodeEntity
public class SimpleNode implements Node {
    ...
}

@NodeEntity
public class SimpleLeafNode implements Node {
    ...
}
The former can be related to other intermediate nodes or to leaf nodes, and I have modeled this relationship by mapping the SimpleNode class to the Node interface:
@RelationshipEntity
public class SimpleRelationship {

    @StartNode
    private SimpleNode parent;

    @EndNode
    private Node child;
}
When I try to start up my Spring Boot application, I receive an SDN mapping exception:
Caused by:
10:51:04.173 [DEBUG] org.neo4j.ogm.metadata.MappingException: No identity field found for class: com.sdn4demo.entity.Node
10:51:04.174 [DEBUG] at org.neo4j.ogm.metadata.info.ClassInfo.identityField(ClassInfo.java:291)
10:51:04.174 [DEBUG] at org.springframework.data.neo4j.mapping.Neo4jPersistentProperty.<init>(Neo4jPersistentProperty.java:76)
10:51:04.174 [DEBUG] at org.springframework.data.neo4j.mapping.Neo4jMappingContext.createPersistentProperty(Neo4jMappingContext.java:100)
Again, this was working before the 7-16-15 snapshot, so my question is: is this not supported functionality? Is this a bug?
A contrived example exists at:
https://github.com/simon-lam/sdn-4-demo
Reproducible by running ./gradlew clean test --debug
It's a bug. We're currently working on sorting things out regarding Spring Data Commons and Spring Data REST integration, and this is one of the consequences of using bleeding-edge stuff.
Using RC1 is probably the best bet for now. Keep an eye on this JIRA issue to see when it's completed: https://jira.spring.io/browse/DATAGRAPH-564
I am dipping my toes in the Neo4j water, and am running into the following error:
Caused by: org.neo4j.kernel.impl.storemigration.UpgradeNotAllowedByConfigurationException: Failed to start Neo4j with an older data store version. To enable automatic upgrade, please set configuration parameter "allow_store_upgrade=true"
at org.neo4j.kernel.impl.storemigration.ConfigMapUpgradeConfiguration.checkConfigurationAllowsAutomaticUpgrade(ConfigMapUpgradeConfiguration.java:39)
at org.neo4j.kernel.impl.storemigration.StoreUpgrader.attemptUpgrade(StoreUpgrader.java:71)
at org.neo4j.kernel.impl.nioneo.store.StoreFactory.tryToUpgradeStores(StoreFactory.java:144)
at org.neo4j.kernel.impl.nioneo.store.StoreFactory.newNeoStore(StoreFactory.java:119)
at org.neo4j.kernel.impl.nioneo.xa.NeoStoreXaDataSource.start(NeoStoreXaDataSource.java:323)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:503)
... 37 more
I have, however, modified neo4j.properties to include:

# Enable this to be able to upgrade a store from an older version
allow_store_upgrade=true
I am using Blueprints and Blueprints-emf. Here is my code:
@ApplicationScoped
public class EmfPersistance {

    public void testCall() throws IOException {
        EPackage.Registry.INSTANCE.put(EcorePackage.eNS_URI, EcorePackage.eINSTANCE);
        EPackage.Registry.INSTANCE.put(DomainPackage.eNS_URI, DomainPackage.eINSTANCE);

        Neo4j2Graph graph = new Neo4j2Graph("/home/anton/Documents/software/neo4j/neo4j-community-2.1.7/data/graph.db");

        ResourceSet resourceSet = new ResourceSetImpl();
        resourceSet.getResourceFactoryRegistry()
                   .getExtensionToFactoryMap()
                   .put("*", new BlueprintsResourceFactory(graph));
        resourceSet.getURIConverter()
                   .getURIHandlers()
                   .add(0, new GraphHandler());

        Application app = DomainFactory.eINSTANCE.createApplication();
        app.setIdentifier("hello");

        User user = DomainFactory.eINSTANCE.createUser();
        user.setActualName("user");
        user.setUsername("user");
        user.setEmailAddress("email@gmail.com");

        Resource resource = resourceSet.createResource(URI.createURI("graph:/my/graph/users"));
        resource.getContents().add(user);
        resource.save(null);

        graph.shutdown();
    }
}
I had the same problem. I added the following dependency along with the Blueprints ones, and I was able to get my test to run:
<dependency>
    <groupId>org.neo4j</groupId>
    <artifactId>neo4j</artifactId>
    <version>2.1.7</version>
</dependency>
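If the error persists, note that the embedded database opened by Blueprints does not read the server's neo4j.properties, so the upgrade flag may need to be passed programmatically. A minimal sketch, assuming Neo4j2Graph exposes the same Map-based configuration constructor as the classic Blueprints Neo4jGraph (verify against your Blueprints version):

import java.util.HashMap;
import java.util.Map;
import com.tinkerpop.blueprints.impls.neo4j2.Neo4j2Graph;

// hand the store-upgrade setting directly to the embedded database
Map<String, String> config = new HashMap<String, String>();
config.put("allow_store_upgrade", "true");

Neo4j2Graph graph = new Neo4j2Graph(
        "/home/anton/Documents/software/neo4j/neo4j-community-2.1.7/data/graph.db", config);
// ... use the graph as in the question ...
graph.shutdown();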