I am dipping my toes in the Neo4j water, and am running into the following error:
Caused by: org.neo4j.kernel.impl.storemigration.UpgradeNotAllowedByConfigurationException: Failed to start Neo4j with an older data store version. To enable automatic upgrade, please set configuration parameter "allow_store_upgrade=true"
at org.neo4j.kernel.impl.storemigration.ConfigMapUpgradeConfiguration.checkConfigurationAllowsAutomaticUpgrade(ConfigMapUpgradeConfiguration.java:39)
at org.neo4j.kernel.impl.storemigration.StoreUpgrader.attemptUpgrade(StoreUpgrader.java:71)
at org.neo4j.kernel.impl.nioneo.store.StoreFactory.tryToUpgradeStores(StoreFactory.java:144)
at org.neo4j.kernel.impl.nioneo.store.StoreFactory.newNeoStore(StoreFactory.java:119)
at org.neo4j.kernel.impl.nioneo.xa.NeoStoreXaDataSource.start(NeoStoreXaDataSource.java:323)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:503)
... 37 more
I have, however, modified the neo4j.properties to include:
# Enable this to be able to upgrade a store from an older version
allow_store_upgrade=true
I am using Blueprints and Blueprints-emf. Here is my code:
@ApplicationScoped
public class EmfPersistance {

    public void testCall() throws IOException {
        EPackage.Registry.INSTANCE.put(EcorePackage.eNS_URI, EcorePackage.eINSTANCE);
        EPackage.Registry.INSTANCE.put(DomainPackage.eNS_URI, DomainPackage.eINSTANCE);

        Neo4j2Graph graph = new Neo4j2Graph("/home/anton/Documents/software/neo4j/neo4j-community-2.1.7/data/graph.db");

        ResourceSet resourceSet = new ResourceSetImpl();
        resourceSet
                .getResourceFactoryRegistry()
                .getExtensionToFactoryMap()
                .put("*", new BlueprintsResourceFactory(graph));
        resourceSet
                .getURIConverter()
                .getURIHandlers()
                .add(0, new GraphHandler());

        Application app = DomainFactory.eINSTANCE.createApplication();
        app.setIdentifier("hello");

        User user = DomainFactory.eINSTANCE.createUser();
        user.setActualName("user");
        user.setUsername("user");
        user.setEmailAddress("email@gmail.com");

        Resource resource = resourceSet.createResource(URI.createURI("graph:/my/graph/users"));
        resource.getContents().add(user);
        resource.save(null);

        graph.shutdown();
    }
}
I had the same problem. I added the following dependency along with the blueprints ones and I was able to get my test to run.
<dependency>
    <groupId>org.neo4j</groupId>
    <artifactId>neo4j</artifactId>
    <version>2.1.7</version>
</dependency>
Related
I have Cypher queries that make use of APOC functions. They work without problems when running the app directly, but I would also like to test those queries. I tried the following approach but am getting an exception: Unknown function 'apoc.coll.toSet'
My sample test class:
public class ApocTest {

    private static Neo4j neo4j;
    private static Driver driver;

    @BeforeAll
    static void initializeNeo4j() {
        // Make sure that the plugins folder is listed in -cp
        Path pluginDirContainingApocJar = Paths.get("src/main/resources/neo4j-plugins/");
        if (!Files.exists(pluginDirContainingApocJar)) {
            throw new IllegalArgumentException("Invalid path to plugins directory");
        }

        neo4j = Neo4jBuilders
                .newInProcessBuilder()
                .withDisabledServer()
                .withFixture("CREATE (p1:Person)-[:knows]->(p2:Person)-[:knows]->(p3:Person)")
                .withConfig(GraphDatabaseSettings.plugin_dir, pluginDirContainingApocJar)
                .withConfig(GraphDatabaseSettings.procedure_unrestricted, List.of("apoc.*"))
                .build();

        driver = GraphDatabase.driver(neo4j.boltURI(), AuthTokens.none());
    }

    @AfterAll
    static void stopNeo4j() {
        driver.close();
        neo4j.close();
    }

    @Test
    public void testApoc() {
        String query = "MATCH path=()-[:knows*2]->()\n" +
                "RETURN apoc.coll.toSet(nodes(path)) AS nodesSet";

        List<Object> nodesSet = driver.session()
                .beginTransaction()
                .run(query)
                .single()
                .get("nodesSet")
                .asList();

        assertEquals(3, nodesSet.size());
    }
}
Any idea how to fix that?
The sample project is on GitHub.
Versions:
neo4j-java-driver: 4.1.1
neo4j-harness 4.1.6
org.neo4j.procedure: 4.1.0.5
Update:
So I tried to update the path resolution:
Path pluginDirContainingApocJar = new File(
        ApocConfig.class.getProtectionDomain().getCodeSource().getLocation().toURI())
        .getParentFile().toPath();
That means I don't need to manipulate the APOC jars manually, right?
But I'm still getting error:
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.procedure.impl.GlobalProceduresRegistry#27dc627a' was successfully initialized, but failed to start. Please see the attached cause exception "Unable to set up injection for procedure `CypherProcedures`, the field `cypherProceduresHandler` has type `class apoc.custom.CypherProceduresHandler` which is not a known injectable component.".
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:463)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:110)
at org.neo4j.graphdb.facade.DatabaseManagementServiceFactory.startDatabaseServer(DatabaseManagementServiceFactory.java:189)
... 58 more
Caused by: org.neo4j.kernel.api.exceptions.ComponentInjectionException: Unable to set up injection for procedure `CypherProcedures`, the field `cypherProceduresHandler` has type `class apoc.custom.CypherProceduresHandler` which is not a known injectable component.
at org.neo4j.procedure.impl.FieldInjections.createInjector(FieldInjections.java:98)
at org.neo4j.procedure.impl.FieldInjections.setters(FieldInjections.java:81)
at org.neo4j.procedure.impl.ProcedureCompiler.compileProcedure(ProcedureCompiler.java:264)
at org.neo4j.procedure.impl.ProcedureCompiler.compileProcedure(ProcedureCompiler.java:226)
at org.neo4j.procedure.impl.ProcedureJarLoader.loadProcedures(ProcedureJarLoader.java:114)
at org.neo4j.procedure.impl.ProcedureJarLoader.loadProceduresFromDir(ProcedureJarLoader.java:85)
at org.neo4j.procedure.impl.GlobalProceduresRegistry.start(GlobalProceduresRegistry.java:342)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:442)
... 60 more
Update 2 - working on 4.0:
For some reason, downgrading to Neo4j 4.0, the same version as in the recommended example, was enough to make it work. For now I won't spend more time trying to run it on Neo4j 4.1/4.2.
My code
The problem is probably the way the path to the plugin directory was created. There's an example from Michael Simons here that shows how to resolve it using the Neo4j classloader: https://github.com/michael-simons/neo4j-examples-and-tips/blob/master/examples/testing-ogm-against-embedded-with-apoc/src/test/java/org/neo4j/tips/testing/testing_ogm_against_embedded_with_apoc/ApplicationTests.java#L53
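For reference, a rough sketch of that classloader-based approach could look like the following (this is not copied from the linked example; the apoc.coll.Coll class name and the omitted exception handling are assumptions):

// Sketch: resolve the plugin directory from a class that ships inside the APOC jar,
// loading it through Neo4j's own classloader so the server scans the jar that is on
// the test classpath. "apoc.coll.Coll" is an assumption; any class from the jar works.
// (Checked exceptions are left unhandled for brevity.)
ClassLoader neo4jClassLoader = GraphDatabaseSettings.class.getClassLoader();
Class<?> apocClass = Class.forName("apoc.coll.Coll", false, neo4jClassLoader);
Path pluginDirContainingApocJar = new File(
        apocClass.getProtectionDomain().getCodeSource().getLocation().toURI())
        .getParentFile().toPath();

neo4j = Neo4jBuilders.newInProcessBuilder()
        .withDisabledServer()
        .withConfig(GraphDatabaseSettings.plugin_dir, pluginDirContainingApocJar)
        .withConfig(GraphDatabaseSettings.procedure_unrestricted, List.of("apoc.*"))
        .build();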
We are using the Jersey Test Framework for API testing. In test mode we use an H2 database, MySQL in production. Everything is fine up to this point.
Now I want to write tests for our repositories to check if the data is written properly to the database.
I can't inject any classes in my tests, so I am using the standard constructor to create a new instance of RepositoryA. That works for me.
Now the problem: RepositoryA injects an instance of RepositoryB, and injection isn't working in test scope.
Is it possible to get injection running in this environment?
Depending on the versions of the libraries you are using, running CDI in a JUnit test is done differently.
First you need to add this dependency, selecting the right version:
<dependency>
    <groupId>org.jboss.weld</groupId>
    <artifactId>weld-junit5</artifactId> <!-- or weld-junit4 -->
    <version>1.3.0.Final</version>
    <scope>test</scope>
</dependency>
Then you can enable Weld in your JUnit test. Here is an example of injecting a repository for an entity class called VideoGame:
@Slf4j
@EnableWeld
class VideoGameRepositoryTest
{
    @WeldSetup
    private WeldInitiator weld = WeldInitiator.performDefaultDiscovery();

    @Inject
    private VideoGameRepository repo;

    @Test
    void test()
    {
        VideoGame videoGame = VideoGameFactory.newInstance();
        videoGame.setName("XENON");

        repo.save(videoGame);

        // testing that the version field has been generated by the JPA provider
        Assert.assertNotNull(videoGame.getVersion());
        Assert.assertTrue(videoGame.getVersion() > 0);

        log.info("Video Game : {}", videoGame);
    }
}
The important parts are:
the @EnableWeld annotation placed on the JUnit test class.
the @WeldSetup annotation placed on a WeldInitiator field, to look up all annotated classes.
don't forget a beans.xml file in META-INF of your test classpath in order to set up the discovery mode (a minimal example follows this list).
@Slf4j is a Lombok annotation; you don't need it (unless you are already using Lombok).
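A minimal beans.xml on the test classpath could look like this (the discovery mode shown is just one common choice, not necessarily what the original project uses):

<!-- src/test/resources/META-INF/beans.xml -->
<beans xmlns="http://xmlns.jcp.org/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
                           http://xmlns.jcp.org/xml/ns/javaee/beans_1_1.xsd"
       version="1.1" bean-discovery-mode="all">
</beans>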
Here the VideoGameRepository instance benefits from injection as well, just like in a classical CDI project.
Here is the code of the VideoGameFactory, which returns a brand new instance of the entity class marked with the @Dependent scope. This factory programmatically invokes the current CDI context.
public class VideoGameFactory
{
    public static VideoGame newInstance()
    {
        // ask CDI for the instance, injecting required dependencies.
        return CDI.current().select(VideoGame.class).get();
    }
}
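For context, the entity itself might look roughly like the sketch below; the @Dependent scope is taken from the description above, but the field names and accessors are assumptions, not the original class:

// Hypothetical sketch of the entity; fields and accessors are assumptions.
@Dependent
public class VideoGame
{
    private Long version;   // set by the persistence layer when the entity is saved
    private String name;

    public Long getVersion() { return version; }
    public void setVersion(Long version) { this.version = version; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}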
Alternatively, you can have a look at Arquillian, which can spin up a full Java EE server in order to provide all the needed dependencies.
I'm running into a problem with my Elasticsearch Document index creation failing on startup with "java.lang.IllegalArgumentException: can't add a _parent field that points to an already existing type, that isn't already a parent". I'm not sure if this is due to a version upgrade or because I am starting with a brand-new Elasticsearch server install.
Contrived example that shows what I'm seeing:
// UserSearchResult.java
@Document(indexName = "hr_index", type = "user")
public class UserSearchResult implements Serializable {
    ...
    @Field(type = FieldType.keyword)
    @Parent(type = "department")
    private String departmentCode;
    ...
}

// DepartmentSearchResult.java
@Document(indexName = "hr_index", type = "department")
public class DepartmentSearchResult implements Serializable {
    ...
}
When I start my application I get that exception. If I check the ElasticSearch server, I see the "hr_index" index and the "department" mapping, but the "user" mapping is not created.
If I understand the error, it's because "department" is created first, and then when Spring tries to create "user" with "department" as its parent, it complains, since "department" wasn't marked as a parent when it was created.
Is there some way (via annotation?) to denote DepartmentSearchResult as being a parent when it's created?
Or, is it possible to give a hint to Spring Data Elasticsearch as to what order it should create the indices/mappings? I have seen some other posts (Spring Data Elasticsearch Parent/Child Document Repositories / Test execution error) but disabling auto-creation and then manually creating the mappings myself (either as part of my Spring codebase or external to the app) seems kind of "un-Spring-y" to me.
Or, is there some other approach I should be taking?
(This is a working Spring application that had been using Spring 4.2.1 and Spring Data Release Train Gosling, which I'm attempting to upgrade to Spring 5.0.0 and Spring Data Release Train Kay. As part of this I am starting with a fresh Elasticsearch install, so I'm not sure if this error is coming from the upgrade or just because the install is clean.)
In Spring Data Elasticsearch, support for the parent-child relationship is currently rather poorly developed.
The problem is most likely due to the fact that you are using a clean installation of Elasticsearch. Before the upgrade the problem did not arise because the mappings had already been created. As a solution you can use the elasticsearchTemplate, which is part of Spring Data Elasticsearch, together with an ApplicationListener. It's simple, just 3 steps.
Drop the index in ES (this is needed only once):
curl -XDELETE [ES_IP]:9200/hr_index
Tell Spring Data Elasticsearch not to create indices and mappings automatically:
// UserSearchResult.java
@Document(indexName = "hr_index", type = "user", createIndex = false)
public class UserSearchResult implements Serializable {
    ...
    @Field(type = FieldType.keyword)
    @Parent(type = "department")
    private String departmentCode;
    ...
}

// DepartmentSearchResult.java
@Document(indexName = "hr_index", type = "department", createIndex = false)
public class DepartmentSearchResult implements Serializable {
    ...
}
Add an ApplicationListener:
@Component
public class ApplicationStartupListener implements ApplicationListener<ContextRefreshedEvent> {

    @Autowired
    private ElasticsearchTemplate elasticsearchTemplate;

    // The mappings (child first) must be created only if the department mapping doesn't exist yet
    @Override
    public void onApplicationEvent(ContextRefreshedEvent event) {
        elasticsearchTemplate.createIndex(DepartmentSearchResult.class);
        try {
            elasticsearchTemplate.getMapping(DepartmentSearchResult.class);
        } catch (ElasticsearchException e) {
            elasticsearchTemplate.putMapping(UserSearchResult.class);
            elasticsearchTemplate.putMapping(DepartmentSearchResult.class);
        }
    }
}
P.S. Among other things, it is worth noting that with the release of ES 5.6 the process of removing mapping types began. This inevitably entails the removal of the parent-child relationship. In one of the next releases of Spring Data Elasticsearch we will provide the ability to work with joins; support for parent-child relationships is unlikely to be improved.
According to the Flyway documentation it should be possible to use Flyway in a Grails 3 project out-of-the-box:
Grails 3.x is based on Spring Boot and comes with out-of-the-box integration for Flyway.
All you need to do is add flyway-core to your build.gradle:
compile "org.flywaydb:flyway-core:4.1.1"
Spring Boot will then automatically autowire Flyway with its DataSource and invoke it on startup.
This doesn't work for me. Flyway does not kick in at application startup. In the logs I see some suspicious lines:
FlywayAutoConfiguration did not match
- #ConditionalOnClass found required class 'org.flywaydb.core.Flyway' (OnClassCondition)
- #ConditionalOnProperty (flyway.enabled) matched (OnPropertyCondition)
- #ConditionalOnBean (types: javax.sql.DataSource; SearchStrategy: all) did not find any beans (OnBeanCondition)
...
Exclusions:
org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration
And this is what Spring's FlywayAutoConfiguration class looks like:
@Configuration
@ConditionalOnClass(Flyway.class)
@ConditionalOnBean(DataSource.class)
@ConditionalOnProperty(prefix = "flyway", name = "enabled", matchIfMissing = true)
@AutoConfigureAfter({ DataSourceAutoConfiguration.class,
        HibernateJpaAutoConfiguration.class })
public class FlywayAutoConfiguration {
So it looks to me like it is not working because DataSourceAutoConfiguration is excluded from auto-configuration.
Is this analysis correct?
Why and where is DataSourceAutoConfiguration excluded? Presumably somewhere deep inside Grails, because I am not aware of any place in my code that could cause this.
How can I make the Flyway integration into Grails work as promised by the Flyway docs? I know I can do it manually via resources.groovy (working code from my project, heavily inspired by the Grails Flyway plugin code):
if (application.config.flyway.enabled != false) {
    flyway(Flyway) { bean ->
        bean.initMethod = 'migrate'
        dataSource = ref('dataSource')
        baselineOnMigrate = application.config.flyway.baselineOnMigrate
    }

    BeanDefinition sessionFactoryBeanDef = getBeanDefinition('sessionFactory')
    if (sessionFactoryBeanDef) {
        def dependsOnList = ['flyway'] as Set
        if (sessionFactoryBeanDef.dependsOn?.length > 0) {
            dependsOnList.addAll(sessionFactoryBeanDef.dependsOn)
        }
        sessionFactoryBeanDef.dependsOn = dependsOnList as String[]
    }
}
but if possible I'd prefer the auto-config approach because it supports many flyway properties out-of-the-box and I can keep my resources.groovy tidy.
I am trying to update a Person entity in Neo4j Community Edition 3.0.3 using SDN (spring-data-neo4j 4.1.2.RELEASE), and I am seeing the following behavior while updating an entity:
I created a Person entity named "person" and saved it to the database (line 8).
I changed a property (fullName) of the saved entity but did not persist that change to the database (line 10).
I retrieved the same person from the database, but using a findBy method, into another variable named "person2" (line 12).
The change made to the variable "person" (in line 10) is lost. Both person and person2 now have the same property values.
1.  Person person = new Person();
2.  person.setUuid(UUID.randomUUID().toString());
3.  person.setFullName("P1");
4.  person.setEmail("PersonP1@gmail.com");
5.  person.setUsername("PersonP1@gmail.com");
6.  person.setPhone("123456789");
7.  person.setDob(new Date());
8.  personService.create(person);
9.  System.out.println(person);
    // Person{id=27, username='PersonP1@gmail.com', fullName='P1', email='PersonP1@gmail.com'}
10. person.setFullName("P2");
11. System.out.println(person);
    // Person{id=27, username='PersonP1@gmail.com', fullName='P2', email='PersonP1@gmail.com'}
12. Person person2 = personService.findByEmail("PersonP1@gmail.com");
13. System.out.println(person2);
    // Person{id=27, username='PersonP1@gmail.com', fullName='P1', email='PersonP1@gmail.com'}
14. System.out.println(person);
    // Person{id=27, username='PersonP1@gmail.com', fullName='P1', email='PersonP1@gmail.com'}
Is this the default behavior of Neo4j SDN?
Given below are the pom entries as well as the configuration used for Neo4j, as advised in the comments:
<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-neo4j</artifactId>
    <!-- <version>4.1.2.RELEASE</version> -->
</dependency>
<dependency>
    <groupId>org.neo4j</groupId>
    <artifactId>neo4j-ogm-core</artifactId>
    <version>2.0.4</version>
</dependency>
public class MyNeo4jConfiguration extends Neo4jConfiguration {

    @Bean
    public org.neo4j.ogm.config.Configuration getConfiguration() {
        org.neo4j.ogm.config.Configuration config = new org.neo4j.ogm.config.Configuration();
        config
            .driverConfiguration()
            .setDriverClassName("org.neo4j.ogm.drivers.http.driver.HttpDriver")
            .setCredentials("neo4j", "admin")
            .setURI("http://localhost:7474");
        return config;
    }

    @Bean
    public SessionFactory getSessionFactory() {
        return new SessionFactory(getConfiguration(), "au.threeevolutions.bezzur.domain");
    }
}
This behaviour has been fixed in the latest version of Neo4j OGM, 2.0.4.
If you reload an entity that the session is already tracking, the entity's properties will not be overwritten; the properties in the session cache are returned, preserving your unpersisted modifications. Note, however, that relationships and new nodes may be added to the subgraph in the session if these are pulled in, for example by loading related nodes.
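In other words, with the behaviour described above, the question's example would now keep the in-memory change (a rough sketch reusing the question's personService; the exact service API is an assumption):

// With Neo4j OGM 2.0.4+, reloading an entity the session is already tracking
// returns the session-cached instance, so unpersisted changes survive the reload.
Person person = new Person();
person.setEmail("PersonP1@gmail.com");
person.setFullName("P1");
personService.create(person);      // person is now tracked by the OGM session

person.setFullName("P2");          // in-memory change, not yet persisted

Person person2 = personService.findByEmail("PersonP1@gmail.com");
System.out.println(person2.getFullName());   // prints "P2": the cached, modified instance is returned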