I have Cypher queries that make use of APOC functions. They work without problems when running the app directly, but I would also like to test those queries. I tried the following approach but am getting the exception: Unknown function 'apoc.coll.toSet'
My sample test class:
public class ApocTest {

    private static Neo4j neo4j;
    private static Driver driver;

    @BeforeAll
    static void initializeNeo4j() {
        // Make sure that the plugins folder is listed in -cp
        Path pluginDirContainingApocJar = Paths.get("src/main/resources/neo4j-plugins/");
        if (!Files.exists(pluginDirContainingApocJar)) {
            throw new IllegalArgumentException("Invalid path to plugins directory");
        }

        neo4j = Neo4jBuilders
                .newInProcessBuilder()
                .withDisabledServer()
                .withFixture("CREATE (p1:Person)-[:knows]->(p2:Person)-[:knows]->(p3:Person)")
                .withConfig(GraphDatabaseSettings.plugin_dir, pluginDirContainingApocJar)
                .withConfig(GraphDatabaseSettings.procedure_unrestricted, List.of("apoc.*"))
                .build();

        driver = GraphDatabase.driver(neo4j.boltURI(), AuthTokens.none());
    }

    @AfterAll
    static void stopNeo4j() {
        driver.close();
        neo4j.close();
    }

    @Test
    public void testApoc() {
        String query = "MATCH path=()-[:knows*2]->()\n" +
                "RETURN apoc.coll.toSet(nodes(path)) AS nodesSet";
        List<Object> nodesSet = driver.session()
                .beginTransaction()
                .run(query)
                .single()
                .get("nodesSet")
                .asList();
        assertEquals(3, nodesSet.size());
    }
}
Any idea how to fix that?
The sample project is on GitHub.
Versions:
neo4j-java-driver: 4.1.1
neo4j-harness: 4.1.6
org.neo4j.procedure: 4.1.0.5
Update:
So I tried to update the plugin path resolution:

Path pluginDirContainingApocJar = new File(
        ApocConfig.class.getProtectionDomain().getCodeSource().getLocation().toURI())
        .getParentFile().toPath();

That means I don't need to manipulate the APOC jars manually, right?
But I'm still getting an error:
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.procedure.impl.GlobalProceduresRegistry#27dc627a' was successfully initialized, but failed to start. Please see the attached cause exception "Unable to set up injection for procedure `CypherProcedures`, the field `cypherProceduresHandler` has type `class apoc.custom.CypherProceduresHandler` which is not a known injectable component.".
    at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:463)
    at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:110)
    at org.neo4j.graphdb.facade.DatabaseManagementServiceFactory.startDatabaseServer(DatabaseManagementServiceFactory.java:189)
    ... 58 more
Caused by: org.neo4j.kernel.api.exceptions.ComponentInjectionException: Unable to set up injection for procedure `CypherProcedures`, the field `cypherProceduresHandler` has type `class apoc.custom.CypherProceduresHandler` which is not a known injectable component.
    at org.neo4j.procedure.impl.FieldInjections.createInjector(FieldInjections.java:98)
    at org.neo4j.procedure.impl.FieldInjections.setters(FieldInjections.java:81)
    at org.neo4j.procedure.impl.ProcedureCompiler.compileProcedure(ProcedureCompiler.java:264)
    at org.neo4j.procedure.impl.ProcedureCompiler.compileProcedure(ProcedureCompiler.java:226)
    at org.neo4j.procedure.impl.ProcedureJarLoader.loadProcedures(ProcedureJarLoader.java:114)
    at org.neo4j.procedure.impl.ProcedureJarLoader.loadProceduresFromDir(ProcedureJarLoader.java:85)
    at org.neo4j.procedure.impl.GlobalProceduresRegistry.start(GlobalProceduresRegistry.java:342)
    at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:442)
    ... 60 more
Update 2 - working on 4.0:
For some reason, downgrading to Neo4j 4.0 (the same version as in the recommended example) was enough to make it work. For now I won't spend more time trying to run it on Neo4j 4.1/4.2.
My code
The problem was probably the way the path to the plugin directory was created. There's an example from Michael Simons that shows how to use the Neo4j classloader for this: https://github.com/michael-simons/neo4j-examples-and-tips/blob/master/examples/testing-ogm-against-embedded-with-apoc/src/test/java/org/neo4j/tips/testing/testing_ogm_against_embedded_with_apoc/ApplicationTests.java#L53
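For illustration, a minimal sketch of that idea (my own paraphrase, not the exact code from the linked example): locate the APOC jar on the test classpath through a class that lives inside it, then copy that single jar into a fresh temporary directory and use it as plugin_dir, so the Neo4j plugin classloader never sees anything else. ApocConfig is the same class the update above already uses for the lookup.

static Path isolatedApocPluginDir() throws Exception {
    // Resolve the jar that contains ApocConfig, i.e. the APOC jar on the classpath
    Path apocJar = Paths.get(ApocConfig.class
            .getProtectionDomain().getCodeSource().getLocation().toURI());
    // Copy it into an otherwise empty temp directory used as the plugin directory
    Path pluginDir = Files.createTempDirectory("neo4j-plugins");
    Files.copy(apocJar, pluginDir.resolve(apocJar.getFileName()));
    return pluginDir;
}

The returned path would then be passed to .withConfig(GraphDatabaseSettings.plugin_dir, ...) in the test above.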
Related
I am struggling with org.testcontainers:oracle-xe:1.14.3.
I am trying to run a test intended to verify schema creation and migration; however, I'm getting stuck at the init script when trying to initialize the users for the test with the user 'sys as sysdba'.
@Before
public void setUp() {
    oracleContainer = new OracleContainer("oracleinanutshell/oracle-xe-11g")
            .withUsername("sys as sysdba")
            .withInitScript("oracle-initscript.sql");
    oracleContainer.start();
}
The above seems to be able to connect, but execution of the init script fails with
ORA-01109: database not open
Using the 'system' user in the above does not provide the init script connection with sysdba privileges, but it does result in an open database.
I'm looking for a solution that will allow me to initialize multiple users prior to a test. This initialization includes grants that require sysdba privileges. The test, in which some SQL scripts are executed, requires that both users are created in the database and can connect to it.
In my case I'm using:

oracleContainer = new OracleContainer("gvenzl/oracle-xe:18.4.0-slim")
        .withUsername("test")
        .withPassword("test")
        .withEnv("ORACLE_PASSWORD", "s") // sys password is required
        .withCopyFileToContainer(MountableFile.forHostPath("oracle-initscript.sql"), "/container-entrypoint-initdb.d/init.sql");
gvenzl/oracle-xe is the default image used by the org.testcontainers oracle-xe module.
The documentation for this image describes how to run initialization SQL on DB start, and it works great.
Hard to say what the issue is, but here are some tips:
- maybe "sys as sysdba" is not valid in your code; the documentation is not clear about the usage
- maybe withLogConsumer can provide some clues about what's wrong (see the sketch below)
- I recommend the image gvenzl/oracle-xe
- in some cases withInitScript may not work properly
- it is useful to test the init script on a container started manually
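For the withLogConsumer tip, here is a minimal sketch (the logger name is illustrative; Slf4jLogConsumer comes from org.testcontainers.containers.output): it forwards the container's stdout/stderr to SLF4J, which usually reveals why an init script fails.

Logger logger = LoggerFactory.getLogger("oracle-container");
OracleContainer oracleContainer = new OracleContainer("gvenzl/oracle-xe:18.4.0-slim")
        .withLogConsumer(new Slf4jLogConsumer(logger)); // prints container output to the test log
oracleContainer.start();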
I ended up with this approach (as sys admin, two different schemas/users are created):
@SpringBootTest(classes = Main.class)
@Import(DbConfiguration.class)
@Testcontainers
public class ServiceIntegrationTest {

    @Container
    public static final OracleContainer oracleContainer =
            new OracleContainer("gvenzl/oracle-xe:21-slim-faststart");
}
import static com.integrationtests.local_test.service.IntegrationTest.oracleContainer;

@TestConfiguration
public class DbConfiguration {

    static final String DEFAULT_SYS_USER = "sys as sysdba";
    private static final String ENTITY_MANAGER_FACTORY = "entityManagerFactory";

    @Bean
    public DataSource getDataSource() {
        DataSourceBuilder<?> dataSourceBuilder = DataSourceBuilder.create();
        dataSourceBuilder.driverClassName("oracle.jdbc.OracleDriver");
        dataSourceBuilder.url(oracleContainer.getJdbcUrl());
        dataSourceBuilder.username(DEFAULT_SYS_USER);
        dataSourceBuilder.password(oracleContainer.getPassword());
        return dataSourceBuilder.build();
    }
}
Also, register the init scripts in application.yaml:

spring:
  datasource:
    initialization-mode: always
    schema:
      - classpath:/sql/init_schemas/USER_ONE.sql
      - classpath:/sql/init_schemas/USER_TWO.sql
I'm using a Neo4j Java application with the APOC procedures: XML import and mergeNodes.
The XML import works fine, but I can't say the same for the mergeNodes function.
I know how to register an APOC procedure, so here is the code:
private static void registerApocProcedure(GraphDatabaseService graphDB) throws IllegalArgumentException {
    // Register APOC procedures
    Procedures procedures = ((GraphDatabaseAPI) graphDB).getDependencyResolver().resolveDependency(Procedures.class);
    List<Class<?>> apocProcedures = Arrays.asList(Xml.class, Merge.class, RefactorConfig.class, RefactorResult.class, RelationshipRefactorResult.class, NodeRefactorResult.class);
    apocProcedures.forEach((proc) -> {
        try {
            procedures.registerProcedure(proc);
        } catch (KernelException e) {
            throw new RuntimeException("Error registering " + proc, e);
        }
    });
}
As you can see, I also included some APOC classes whose names are similar to apoc.refactor.mergeNodes, but nothing happens. Probably I'm registering the wrong class, because this APOC procedure is built in, so I'm sure it's already present in the library; it is also documented here.
So, how can I call this function?
SOLUTION: GraphRefactoring.class
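For reference, a minimal sketch of the fix in the registration code above: apoc.refactor.mergeNodes is defined in apoc.refactor.GraphRefactoring, so that is the class to register instead of the result/config helper classes.

List<Class<?>> apocProcedures = Arrays.asList(Xml.class, GraphRefactoring.class);
apocProcedures.forEach((proc) -> {
    try {
        // GraphRefactoring registers apoc.refactor.*, including mergeNodes
        procedures.registerProcedure(proc);
    } catch (KernelException e) {
        throw new RuntimeException("Error registering " + proc, e);
    }
});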
I'm running into a problem with my Elasticsearch Document index creation failing on startup with "java.lang.IllegalArgumentException: can't add a _parent field that points to an already existing type, that isn't already a parent". I'm not sure if this is due to a version upgrade or because I am starting with a brand-new Elasticsearch server install.
A contrived example that shows what I'm seeing:
// UserSearchResult.java
@Document(indexName = "hr_index", type = "user")
public class UserSearchResult implements Serializable {
    ...
    @Field(type = FieldType.keyword)
    @Parent(type = "department")
    private String departmentCode;
    ...
}

// DepartmentSearchResult.java
@Document(indexName = "hr_index", type = "department")
public class DepartmentSearchResult implements Serializable {
    ...
}
When I start my application I get that exception. If I check the Elasticsearch server, I see the "hr_index" index and the "department" mapping, but the "user" mapping is not created.
If I understand the error correctly, it occurs because "department" is created first, and when Spring then tries to create "user" with "department" as its parent, Elasticsearch rejects that, since "department" wasn't marked as a parent when it was created.
Is there some way (via an annotation?) to denote DepartmentSearchResult as a parent when it's created?
Or, is it possible to give Spring Data Elasticsearch a hint as to what order it should create the indices/mappings in? I have seen some other posts (Spring Data Elasticsearch Parent/Child Document Repositories / Test execution error) but disabling auto-creation and then creating everything myself (either as part of my Spring codebase or externally to the app) seems kind of "un-Spring-y" to me.
Or, is there some other approach I should be taking?
(This is a working Spring application that had been using Spring 4.2.1 and Spring Data Release Train Gosling, which I'm attempting to upgrade to Spring 5.0.0 and Spring Data Release Train Kay. As part of this I am starting with a fresh Elasticsearch install, so I'm not sure if this error comes from the upgrade or just because the install is clean.)
In Spring Data Elasticsearch, support for the parent-child relationship is currently rather poorly developed.
The problem is most likely due to the fact that you are using a clean installation of Elasticsearch. Before the upgrade the problem did not arise because the mappings had already been created. As a solution, you can use the elasticsearchTemplate that is part of Spring Data Elasticsearch, together with an ApplicationListener. It's simple, just 3 steps.
Drop the index in ES (this only needs to be done once):
curl -XDELETE [ES_IP]:9200/hr_index
Tell SD ES not to create indices and mappings automatically
// UserSearchResult.java
@Document(indexName = "hr_index", type = "user", createIndex = false)
public class UserSearchResult implements Serializable {
    ...
    @Field(type = FieldType.keyword)
    @Parent(type = "department")
    private String departmentCode;
    ...
}

// DepartmentSearchResult.java
@Document(indexName = "hr_index", type = "department", createIndex = false)
public class DepartmentSearchResult implements Serializable {
    ...
}
Add an ApplicationListener:

@Component
public class ApplicationStartupListener implements ApplicationListener<ContextRefreshedEvent> {

    @Autowired
    private ElasticsearchTemplate elasticsearchTemplate;

    // Mapping for the child must be created only if the mapping for the parent doesn't exist yet
    @Override
    public void onApplicationEvent(ContextRefreshedEvent event) {
        elasticsearchTemplate.createIndex(DepartmentSearchResult.class);
        try {
            elasticsearchTemplate.getMapping(DepartmentSearchResult.class);
        } catch (ElasticsearchException e) {
            elasticsearchTemplate.putMapping(UserSearchResult.class);
            elasticsearchTemplate.putMapping(DepartmentSearchResult.class);
        }
    }
}
P.S. Among other things, it is worth noting that with the release of ES 5.6 the removal of mapping types began. This inevitably entails the removal of the parent-child relationship in its current form. In one of the next releases of Spring Data Elasticsearch we will provide the ability to work with join fields; working with parent-child relationships is unlikely to be improved.
I am new to Neo4j and Neo4j Spatial. I want to import an OSM file which I exported from https://www.openstreetmap.org/export. For this I use the following code. All the examples I could find either did not work for me or were incomplete, so I tried to get a version which compiles:
try {
    OSMImporter importer = new OSMImporter("osmGauKlein");

    Map<String, String> config = new HashMap<String, String>();
    config.put("neostore.nodestore.db.mapped_memory", "90M");
    config.put("dump_configuration", "true");
    config.put("use_memory_mapped_buffers", "true");

    BatchInserter batchInserter = BatchInserters.inserter(dbf, config);
    importer.importFile(batchInserter, "osm/map.osm", false);
    batchInserter.shutdown();

    GraphDatabaseService dbs = dbFactory.newEmbeddedDatabase(dbf);
    importer.reIndex(dbs, 10000);
    dbs.shutdown();
} catch (IOException | XMLStreamException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}
After doing this I can see the nodes in the Neo4j browser. But after the import, some of the spatial procedures don't work anymore. For example:
call spatial.layers
It just returns this error:
Failed to invoke procedure `spatial.layers`: Caused by: java.lang.NoClassDefFoundError: org/geotools/filter/text/cql2/CQLException
Some other procedures are still working, like:
call spatial.layerTypes
What is the problem here?
When trying to import the OSM file directly with Cypher, I used:
call spatial.importOSM("C:/Users/Steffen/workspaceEclipse/NDBS Neo4J/osm/map.osm")
But this leads to another error:
Failed to invoke procedure `spatial.importOSM`: Caused by: java.util.NoSuchElementException: More than one element in org.neo4j.kernel.impl.coreapi.LegacyIndexProxy$1#7ce40edf. First element is 'Node[50]' and the second element is 'Node[1206]'
Some info:
Windows 10 64 bit,
neo4j 3.2.1,
neo4j-spatial-0.24-neo4j-3.1.1-server-plugin
I have a service that implements InitializingBean and DisposableBean:
class MyService implements InitializingBean, DisposableBean {

    static transactional = false

    def grailsApplication

    @Override
    void afterPropertiesSet() {
        System.setProperty("JMS_TIMEOUT", grailsApplication.config.JMS_TIMEOUT);
        // code performing a JNDI lookup
    }
}
The system properties are used to initialize some other components in the service. I have added the configs in Config.groovy.
grails.config.locations = [ "file:${basedir}/grails-app/conf/myconfig.properties" ]
This works fine when running the application. However, I'm writing an integration test in test/integration that injects the service.
class MyServiceIntegrationTests extends GrailsUnitTestCase {

    def myService

    void testMyService() {
    }
}
When running the test I get a stack trace with the following root cause:
Caused by: javax.naming.NameNotFoundException: Name [ConnectionFactory] not bound; 0 bindings: []
    at javax.naming.InitialContext.lookup(InitialContext.java:354)
    at com.ubs.ecredit.common.jmsclient.DefaultConnector.<init>(DefaultConnector.java:36)
It seems that the config could not be loaded or is different in the integration tests. Any idea how I can change the config or code so that these properties are also set for my integration test, before the service is instantiated?
UPDATE:
It turned out the cause was not the configuration but a JNDI lookup and a bug in Grails.
See: http://jira.grails.org/browse/GRAILS-5726
${basedir} resolves to different paths in different environments. As an alternative, you can use PropertiesLoaderUtils.loadProperties to load your customized configuration:
import org.springframework.core.io.support.PropertiesLoaderUtils
import org.springframework.core.io.ClassPathResource
....

void afterPropertiesSet() {
    def configProperties = PropertiesLoaderUtils.loadProperties(
            new ClassPathResource("myconfig.properties"))
    System.setProperty("JMS_TIMEOUT", configProperties.getProperty("JMS_TIMEOUT"))
    ....
}
It turned out the cause was a JNDI lookup used by a library method called from afterPropertiesSet() in my service (not shown above), which can be seen in the stack trace.
After doing some research I found that this was a bug in Grails: http://jira.grails.org/browse/GRAILS-5726
Adding the workaround mentioned there resolved the issue for now.