I have set up Neo4j using the latest Spring 1.5 release, spring-data-neo4j 4.2, with the OGM drivers. The configuration uses the embedded driver without a URI (so an impermanent database store).
Here is the Spring @Configuration bean content:
@Bean
public org.neo4j.ogm.config.Configuration neo4jConfiguration() {
    org.neo4j.ogm.config.Configuration configuration = new org.neo4j.ogm.config.Configuration();
    configuration.driverConfiguration().setDriverClassName("org.neo4j.ogm.drivers.embedded.driver.EmbeddedDriver");
    // don't set the URI for embedded so we get an impermanent database
    return configuration;
}

@Bean
public SessionFactory getSessionFactory() {
    return new SessionFactory(
            neo4jConfiguration(),
            "xxx.yyy.springboot.neo4j.domain");
}

@Bean
public Neo4jTransactionManager transactionManager() {
    return new Neo4jTransactionManager(getSessionFactory());
}
Running a built-in procedure works fine:
/**
 * Test we can call out to standard built-in procedures using cypher
 */
@Test
public void testNeo4jProcedureCalls() {
    Session session = sessionFactory.openSession();
    Result result = session.query("CALL dbms.procedures()", ImmutableMap.of());
    assertThat(result).isNotNull();
    List<Map<String, Object>> dataList = StreamSupport.stream(result.spliterator(), false)
            .collect(Collectors.toList());
    assertThat(dataList).isNotNull();
    assertThat(dataList.size()).isGreaterThan(0);
}
Now I'd like to install and run the APOC procedures, which I've added to the classpath:
/**
 * Test we can call out to https://neo4j-contrib.github.io/neo4j-apoc-procedures
 */
@Test
public void testNeo4jApocProcedureCalls() {
    Session session = sessionFactory.openSession();
    Result result = session.query("CALL apoc.help(\"apoc\")", ImmutableMap.of());
    assertThat(result).isNotNull();
    List<Map<String, Object>> dataList = StreamSupport.stream(result.spliterator(), false)
            .collect(Collectors.toList());
    assertThat(dataList).isNotNull();
    assertThat(dataList.size()).isGreaterThan(0);
}
However, the above fails with the error: There is no procedure with the name 'apoc.help' registered for this database instance
I couldn't find any documentation on registering APOC procedures to run in embedded mode, nor any reference to registering procedures in the OGM documentation. Any tips or snippets would be appreciated.
Thanks for the pointer, Michael. Your example is good for direct access, and this answer gave me the details needed to access it through the neo4j-ogm layer:
Deploy a Procedure to Neo4J when using the embedded driver
So here's what I ended up with to register procedures through spring-data-neo4j.
Note: isEmbedded() checks that the neo4j driver property value contains 'embedded', and the Components.driver() call is a static method provided by the OGM layer.
public void registerProcedures(List<Class<?>> toRegister) {
    if (isEmbedded()) {
        EmbeddedDriver embeddedDriver = (EmbeddedDriver) Components.driver();
        GraphDatabaseService databaseService = embeddedDriver.getGraphDatabaseService();
        Procedures procedures = ((GraphDatabaseAPI) databaseService).getDependencyResolver().resolveDependency(Procedures.class);
        toRegister.forEach((proc) -> {
            try {
                procedures.registerProcedure(proc);
            } catch (KernelException e) {
                throw new RuntimeException("Error registering " + proc, e);
            }
        });
    }
}
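For completeness, one simple way to implement the isEmbedded() check described above is an instanceof test against the same static Components.driver() call (a minimal sketch, not the original implementation, which checked the driver property value):

private boolean isEmbedded() {
    // Same static OGM Components API as used in registerProcedures() above.
    return Components.driver() instanceof EmbeddedDriver;
}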
Then I added a call to register the procedures in the test when running with the embedded driver:
@Test
public void testNeo4jApocProcedureCalls() {
    registerProcedures(asList(
            Help.class,
            Json.class,
            LoadJson.class,
            Xml.class,
            PathExplorer.class,
            Meta.class)
    );
    Session session = sessionFactory.openSession();
    Result result = session.query("CALL apoc.help('apoc')", ImmutableMap.of());
You have to register them manually with your GraphDatabaseService.
See here for an example: https://github.com/neo4j-contrib/rabbithole/blob/3.0/src/main/java/org/neo4j/community/console/Neo4jService.java#L55
With the release of Neo4j 4.0 some things have changed (notably Procedures vs. GlobalProcedures), so I want to share my solution.
I wanted to set up embedded Neo4j along with APOC for test purposes, and here are the results:
For some reason, when including APOC from the Maven repository there were missing classes (e.g. the apoc.util package contained only one class instead of ~20, and the apoc.coll.Coll functions were missing).
In order to fix that I had to use this answer: Compile Jar from Url in Gradle,
and then in my dependencies block I included
testImplementation(urlFile("https://github.com/neo4j-contrib/neo4j-apoc-procedures/releases/download/4.1.0.0/apoc-4.1.0.0-all.jar", "neo4j-apoc"))
Once you have all the classes, register whatever you need; in my case I'm registering only the Coll functions:
EmbeddedNeo4jDriver.kt
val managementService = org.neo4j.dbms.api.DatabaseManagementServiceBuilder(TestConfiguration.Neo4j.directory)
.setConfig(BoltConnector.enabled, true)
.setConfig(BoltConnector.listen_address, SocketAddress(TestConfiguration.Neo4j.hostname, TestConfiguration.Neo4j.port))
.build()
managementService.listDatabases().first()
.let(managementService::database)
.let { it as org.neo4j.kernel.internal.GraphDatabaseAPI }
.dependencyResolver
.resolveDependency(org.neo4j.kernel.api.procedure.GlobalProcedures::class.java)
.registerFunction(apoc.coll.Coll::class.java)
My Spring Boot application is secured by Spring Security OAuth2. The user data is stored in an SQL database. I followed royclarkson's OAuth-protected REST service here; this project works with Spring Data JPA. This works fine.
https://github.com/royclarkson/spring-rest-service-oauth
But now I want to add my Neo4j configuration to get data from my Neo4j database via Neo4j-JDBC (JdbcTemplate). Here I followed this GitHub project:
https://github.com/neo4j-examples/movies-java-spring-boot-jdbc
As a standalone application it works, but if I put these two projects together, I get this exception:
HibernateJpaAutoConfiguration.class]: Invocation of init method failed;
nested exception is org.hibernate.HibernateException:
Unable to determine Dialect to use [name=Neo4j, majorVersion=3];
user must register resolver or explicitly set 'hibernate.dialect'
My Neo4jConfig.java looks like this:
@Configuration
public class Neo4jConfig {

    //NEO4J Server Implementation via JDBC
    private static final String NEO4J_URL = System.getProperty("NEO4J_URL", "jdbc:neo4j://localhost:7474");
    private static final String NEO4J_USER = System.getProperty("NEO4J_USER", "neo4j");
    private static final String NEO4J_PASSWORD = System.getProperty("NEO4J_PASSWORD", "neo4j");

    @Bean
    public DataSource dataSource() {
        return new DriverManagerDataSource(NEO4J_URL, NEO4J_USER, NEO4J_PASSWORD);
    }

    public Neo4jConfig() {
    }

    public String getNeo4JURL() {
        return NEO4J_URL;
    }
}
TripController.java
import hello.data.Trip;

@RestController
public class TripController {

    @Autowired
    JdbcTemplate template;

    public static final RowMapper<Trip> TRIP_ROW_MAPPER = new RowMapper<Trip>() {
        public Trip mapRow(ResultSet rs, int rowNum) throws SQLException {
            return new Trip(rs.getString("tripname"), rs.getInt("slots"), rs.getInt("to_date"), rs.getInt("from_date"));
        }
    };

    String SEARCH_TRIPS_QUERY =
            " MATCH (t:Trip)\n" +
            " RETURN t.tripname as tripname, t.slots as slots, t.to_date as to_date, t.from_date as from_date";

    @RequestMapping(path = "/alltrips", method = RequestMethod.GET)
    public List<Trip> alltrips() {
        return template.query(SEARCH_TRIPS_QUERY, TRIP_ROW_MAPPER);
    }
}
I hope you understand my question. I know I'm really new to Spring, but I hope someone can help me :)
This is happening because Hibernate does not find any dialect for Neo4j, as Neo4j is not an RDBMS and a dialect is not provided by default. You can use Hibernate OGM (search for it and include it in pom.xml), and then use the following configuration to set up the EntityManager and transaction manager:
@Configuration
@EnableJpaRepositories(basePackages = {
        "your repository packages" }, entityManagerFactoryRef = "n4jEntityManager", transactionManagerRef = "n4jTxnManager")
public class DatabaseConfiguration {

    @Bean(name = "n4jEntityManager")
    public LocalContainerEntityManagerFactoryBean entityManager() {
        Map<String, Object> properties = new HashMap<String, Object>();
        properties.put("javax.persistence.transactionType", "resource_local");
        properties.put("hibernate.ogm.datastore.provider", "neo4j");
        properties.put("hibernate.ogm.datastore.host", "localhost");
        properties.put("hibernate.ogm.datastore.port", "7474");
        properties.put("hibernate.ogm.datastore.database", "your database");
        properties.put("hibernate.ogm.datastore.create_database", "true or false");
        LocalContainerEntityManagerFactoryBean entityManager = new LocalContainerEntityManagerFactoryBean();
        entityManager.setPackagesToScan("your domain packages");
        entityManager.setPersistenceUnitName("n4jPU");
        entityManager.setJpaPropertyMap(properties);
        entityManager.setPersistenceProviderClass(HibernateOgmPersistence.class);
        return entityManager;
    }

    @Bean(name = "n4jTxnManager")
    public PlatformTransactionManager txnManager() {
        JpaTransactionManager transactionManager = new JpaTransactionManager();
        transactionManager.setEntityManagerFactory(entityManager().getObject());
        return transactionManager;
    }
}
But I suggest removing Hibernate altogether if you are not going to use an RDBMS and will only be using Neo4j. Spring Data has good support for NoSQL databases, and entities can be defined using annotations like @NodeEntity and @GraphId.
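For illustration, a minimal entity along those lines could look like this (a sketch only; the Trip fields are borrowed from the question above, and the annotations are the Spring Data Neo4j 4.x / OGM ones):

import org.neo4j.ogm.annotation.GraphId;
import org.neo4j.ogm.annotation.NodeEntity;

@NodeEntity
public class Trip {

    @GraphId
    private Long id;

    private String tripname;
    private int slots;
    private int toDate;
    private int fromDate;

    public String getTripname() { return tripname; }
    public int getSlots() { return slots; }
}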
Using the following dependencies (Gradle):
org.glassfish.jersey.containers:jersey-container-servlet:2.22.2
org.eclipse.jetty:jetty-servlet:9.3.2.v20150730
I have an embedded Jetty server with a Jersey servlet container... something like this:
package mypkg.rest.jersey;

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;
import org.glassfish.jersey.server.ServerProperties;
import org.glassfish.jersey.servlet.ServletContainer;
import se.transmode.tnm.alarm.api.AlarmRetrieval;
import mypkg.rest.RestServer;
import mypkg.rest.jersey.serviceImpl.ModelAdapter;

public class JerseyBasedRestServer implements RestServer {

    public static final int INITIALIZE_ON_USE = 0;

    private Server server;
    private final ServletContextHandler context;
    private final ServletHolder servlet;
    private final ModelAdapter modelAdapter;

    public JerseyBasedRestServer(BusinessObjects businessObjects) {
        this.modelAdapter = new ModelAdapter(businessObjects); // I want this instance to somehow be available for my ServletContainer to use.
        context = new ServletContextHandler(ServletContextHandler.SESSIONS);
        servlet = context.addServlet(ServletContainer.class, "/*");
        servlet.setInitOrder(INITIALIZE_ON_USE);
        servlet.setInitParameter(ServerProperties.PROVIDER_PACKAGES, "mypackage.jersey.generated.api.service");
        servlet.setInitParameter(ServerProperties.MEDIA_TYPE_MAPPINGS, "json : application/json");
        context.setContextPath("/");
    }

    private void startServlet() {
        try {
            servlet.start();
            servlet.initialize();
        } catch (Exception e) {
            log.error("Failed to initialize servlet. {}", e.getMessage());
        }
    }

    @Override
    public void init(int port) {
        server = new Server(port);
        server.setHandler(context);
        try {
            server.start();
            server.join();
            startServlet();
        } catch (Exception e) {
            log.error("Failed to start jetty server for rest interface");
        } finally {
            server.destroy();
        }
    }
}
The Jersey container will run server code and a model generated using the Swagger code-gen tool
https://github.com/swagger-api/swagger-codegen#getting-started
which delivers the generated model, JacksonJsonProvider, and a RestApi class:
package mypackage.jersey.generated.api.service;

@Path("/")
public class RestApi {

    private final RestApiService delegate = new RestApiServiceImpl(); // Integration point of the generated code

    @GET
    @Path("/list/")
    @Consumes({ "application/json" })
    @Produces({ "application/json" })
    public Response retrieveAlarmList(@Context SecurityContext securityContext) throws NotFoundException {
        return delegate.retrieveAlarmList(securityContext);
    }
}
To integrate the generated code we are left to implement RestApiServiceImpl ourselves.
The ModelAdapter's job is to convert our business objects to the generated rest model.
So the question is: how do I make the adapter instance for our business objects, in this case ModelAdapter, which lives outside the Jersey servlet context, available to the RestApi class, or rather to RestApiServiceImpl?
From reading over the past 24 hours, I gather that I need some sort of context/dependency injection, either through Jetty, Jersey, or some other library (Weld seems to come up a lot). I have tried various combinations of @Inject, @Context, etc., but have come to the conclusion that I have no clue what I am actually doing... I'm not even sure I understand the situation well enough to phrase my question correctly.
More info can be made available on request.
Any help is appreciated.
EDIT: added a link here to https://github.com/englishbobster/JersetAndJetty using @peeskillet's suggestions, but it's still not working.
The first thing you need to make DI work is an AbstractBinder. This is where you make your objects available to be injected.
class Binder extends AbstractBinder {
    @Override
    protected void configure() {
        bind(modelAdapter).to(ModelAdapter.class);
    }
}
Then you need to register the binder with Jersey. The easiest way is to register it in Jersey's ResourceConfig. In your case you are not using one; you are configuring everything through servlet init-params (the web.xml style). For that, you should take a look at this post.
If you want to change your configuration to use a ResourceConfig, which I'd personally rather use, you can do this:
package com.some.pkg;

public class JerseyConfig extends ResourceConfig {

    public JerseyConfig() {
        packages("mypackage.jersey.generated.api.service");
        property(ServerProperties.MEDIA_TYPE_MAPPINGS, "json : application/json");
        register(new Binder());
    }
}
Then to configure it with Jetty, you can do
servlet.setInitParameter(ServletProperties.JAXRS_APPLICATION_CLASS,
        "com.some.pkg.JerseyConfig");
Now you can get rid of those other two init-params, as you are configuring it inside the ResourceConfig.
Another way, without any init-params, is to do
ResourceConfig config = new JerseyConfig();
ServletHolder jerseyServlet = new ServletHolder(new ServletContainer(config));
context.addServlet(jerseyServlet, "/*");
See a full example of the last code snippet here.
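Tying this back to your JerseyBasedRestServer, the whole programmatic wiring could look roughly like this (a sketch only; ModelAdapter and BusinessObjects are your classes, the port is arbitrary, and AbstractBinder here is org.glassfish.hk2.utilities.binding.AbstractBinder from Jersey 2.22.x):

public void startServer(BusinessObjects businessObjects) throws Exception {
    // The instance you want injectable inside Jersey resources.
    ModelAdapter modelAdapter = new ModelAdapter(businessObjects);

    ResourceConfig config = new ResourceConfig()
            .packages("mypackage.jersey.generated.api.service")
            .property(ServerProperties.MEDIA_TYPE_MAPPINGS, "json : application/json")
            .register(new AbstractBinder() {
                @Override
                protected void configure() {
                    bind(modelAdapter).to(ModelAdapter.class);
                }
            });

    // Same Jetty setup as in the question, but handing the ResourceConfig to the ServletContainer.
    ServletContextHandler context = new ServletContextHandler(ServletContextHandler.SESSIONS);
    context.setContextPath("/");
    context.addServlet(new ServletHolder(new ServletContainer(config)), "/*");

    Server server = new Server(8080); // arbitrary port
    server.setHandler(context);
    server.start();
}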
Now you can just inject the ModelAdapter pretty much anywhere within Jersey
In a field
@Inject
private ModelAdapter adapter;
Or in a constructor
@Inject
public RestApi(ModelAdapter adapter) {
    this.adapter = adapter;
}
Or as a method parameter
@GET
public Response get(@Context ModelAdapter adapter) {}
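To get the adapter all the way down to your RestApiServiceImpl, one option is to inject it into RestApi and pass it along to the delegate (a sketch; it assumes you give RestApiServiceImpl a constructor that accepts the adapter, which means hand-editing the generated RestApi class):

@Path("/")
public class RestApi {

    private final RestApiService delegate;

    @Inject
    public RestApi(ModelAdapter adapter) {
        // Hand the injected adapter to the hand-written service implementation.
        this.delegate = new RestApiServiceImpl(adapter);
    }

    @GET
    @Path("/list/")
    @Produces({ "application/json" })
    public Response retrieveAlarmList(@Context SecurityContext securityContext) throws NotFoundException {
        return delegate.retrieveAlarmList(securityContext);
    }
}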
This is my scenario: we are building a routing system using Neo4j and the spatial plugin. We start from an OSM file, read it, and import nodes and relationships into our graph (a custom graph model).
Now, if we don't use Neo4j's batch inserter, importing a compressed OSM file (around 140 MB compressed, around 2 GB uncompressed) takes around 3 days on a dedicated server with the following characteristics: CentOS 6.5 64-bit, quad core, 8 GB RAM. Please note that most of the time is spent creating the Neo4j nodes and relationships; in fact, if we read the same file without doing anything with Neo4j, it is read in around 7 minutes (I'm sure about this because our process first reads the file to store the correct OSM node ids, and then reads it again to create the Neo4j graph).
Obviously we need to improve the import process, so we are trying to use the BatchInserter. So far, so good (I still need to measure how much faster the BatchInserter is, but I expect it will be); so the first thing I did was try the batch inserter in a simple test case (very similar to our code, but without modifying our code directly).
My software versions:
Neo4j: 2.0.2
Neo4jSpatial: 0.13-neo4j-2.0.1
Neo4jGraphCollections: 0.7.1-neo4j-2.0.1
Osmosis: 0.43.1
Since I'm using Osmosis to read the OSM file, I wrote the following Sink implementation:
public class BatchInserterSinkTest implements Sink
{
    public static final Map<String, String> NEO4J_CFG = new HashMap<String, String>();

    private static File basePath = new File("/home/angelo/Scrivania/neo4j");
    private static File dbPath = new File(basePath, "db");

    private GraphDatabaseService graphDb;
    private BatchInserter batchInserter;
    // private BatchInserterIndexProvider batchIndexService;
    private SpatialDatabaseService spatialDb;
    private SimplePointLayer spl;

    static
    {
        NEO4J_CFG.put( "neostore.nodestore.db.mapped_memory", "100M" );
        NEO4J_CFG.put( "neostore.relationshipstore.db.mapped_memory", "300M" );
        NEO4J_CFG.put( "neostore.propertystore.db.mapped_memory", "400M" );
        NEO4J_CFG.put( "neostore.propertystore.db.strings.mapped_memory", "800M" );
        NEO4J_CFG.put( "neostore.propertystore.db.arrays.mapped_memory", "10M" );
        NEO4J_CFG.put( "dump_configuration", "true" );
    }

    @Override
    public void initialize(Map<String, Object> arg0)
    {
        batchInserter = BatchInserters.inserter(dbPath.getAbsolutePath(), NEO4J_CFG);
        graphDb = new SpatialBatchGraphDatabaseService(batchInserter);
        spatialDb = new SpatialDatabaseService(graphDb);
        spl = spatialDb.createSimplePointLayer("testBatch", "latitudine", "longitudine");
        //batchIndexService = new LuceneBatchInserterIndexProvider(batchInserter);
    }

    @Override
    public void complete()
    {
        // TODO Auto-generated method stub
    }

    @Override
    public void release()
    {
        // TODO Auto-generated method stub
    }

    @Override
    public void process(EntityContainer ec)
    {
        Entity entity = ec.getEntity();
        if (entity instanceof Node) {
            Node osmNodo = (Node) entity;
            org.neo4j.graphdb.Node graphNode = graphDb.createNode();
            graphNode.setProperty("osmId", osmNodo.getId());
            graphNode.setProperty("latitudine", osmNodo.getLatitude());
            graphNode.setProperty("longitudine", osmNodo.getLongitude());
            spl.add(graphNode);
        } else if (entity instanceof Way) {
            //do something with the way
        } else if (entity instanceof Relation) {
            //do something with the relation
        }
    }
}
Then I wrote the following test case:
public class BatchInserterTest
{
    private static final Log logger = LogFactory.getLog(BatchInserterTest.class.getName());

    @Test
    public void batchInserter()
    {
        File file = new File("/home/angelo/Scrivania/MilanoPiccolo.osm");
        try
        {
            boolean pbf = false;
            CompressionMethod compression = CompressionMethod.None;
            if (file.getName().endsWith(".pbf"))
            {
                pbf = true;
            }
            else if (file.getName().endsWith(".gz"))
            {
                compression = CompressionMethod.GZip;
            }
            else if (file.getName().endsWith(".bz2"))
            {
                compression = CompressionMethod.BZip2;
            }

            RunnableSource reader;
            if (pbf)
            {
                reader = new crosby.binary.osmosis.OsmosisReader(new FileInputStream(file));
            }
            else
            {
                reader = new XmlReader(file, false, compression);
            }
            reader.setSink(new BatchInserterSinkTest());

            Thread readerThread = new Thread(reader);
            readerThread.start();
            while (readerThread.isAlive())
            {
                try
                {
                    readerThread.join();
                }
                catch (InterruptedException e)
                {
                    /* do nothing */
                }
            }
        }
        catch (Exception e)
        {
            logger.error("Error creating the neo4j database with the batchInserter", e);
        }
    }
}
By executing this code, I get this exception:
Exception in thread "Thread-1" java.lang.ClassCastException: org.neo4j.unsafe.batchinsert.SpatialBatchGraphDatabaseService cannot be cast to org.neo4j.kernel.GraphDatabaseAPI
at org.neo4j.cypher.ExecutionEngine.<init>(ExecutionEngine.scala:113)
at org.neo4j.cypher.javacompat.ExecutionEngine.<init>(ExecutionEngine.java:53)
at org.neo4j.cypher.javacompat.ExecutionEngine.<init>(ExecutionEngine.java:43)
at org.neo4j.collections.graphdb.ReferenceNodes.getReferenceNode(ReferenceNodes.java:60)
at org.neo4j.gis.spatial.SpatialDatabaseService.getSpatialRoot(SpatialDatabaseService.java:76)
at org.neo4j.gis.spatial.SpatialDatabaseService.getLayer(SpatialDatabaseService.java:108)
at org.neo4j.gis.spatial.SpatialDatabaseService.containsLayer(SpatialDatabaseService.java:253)
at org.neo4j.gis.spatial.SpatialDatabaseService.createLayer(SpatialDatabaseService.java:282)
at org.neo4j.gis.spatial.SpatialDatabaseService.createSimplePointLayer(SpatialDatabaseService.java:266)
at it.eng.pinf.graph.batch.test.BatchInserterSinkTest.initialize(BatchInserterSinkTest.java:46)
at org.openstreetmap.osmosis.xml.v0_6.XmlReader.run(XmlReader.java:95)
at java.lang.Thread.run(Thread.java:744)
This is related to this code:
spl = spatialDb.createSimplePointLayer("testBatch", "latitudine", "longitudine");
So now I'm wondering: how can I use the BatchInserter for my case? I have to add the created nodes to the SimplePointLayer... so how can I create it using the BatchInserter graph database service?
Is there any simple sample?
Any tip is really appreciated.
Cheers,
Angelo
The OSMImporter class in the code has an example of using the batch inserter to import OSM data. The main thing is that the batch inserter is not really supported by neo4j spatial, so you need to do a few things manually. If you look at the class OSMImporter.OSMBatchWriter, you will see how it does things. It is not using the SimplePointLayer at all, since that does not support the batch inserter. It is creating the graph structure it wants directly. The simple point layer is quite simple, certainly much simpler than the OSM model created by the code I'm referencing, so I think you should be able to write a batch-inserter compatible version yourself without too much trouble.
What I would recommend is that you create the layer and nodes using the batch inserter to create the correct graph structure, then switch to the normal embedded API and use that to iterate through the nodes and add them to the spatial index.
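A rough sketch of that two-phase approach against the 2.0.x APIs you are already using (dbPath and NEO4J_CFG are the fields from your Sink, the property and layer names come from your code, and the full scan over all nodes is only for illustration):

// Phase 1: build the plain graph structure with the batch inserter (no spatial layer yet).
BatchInserter inserter = BatchInserters.inserter(dbPath.getAbsolutePath(), NEO4J_CFG);
try {
    Map<String, Object> props = new HashMap<String, Object>();
    props.put("osmId", 12345L);            // example values
    props.put("latitudine", 45.4642);
    props.put("longitudine", 9.1900);
    inserter.createNode(props);
    // ... create the remaining nodes and relationships the same way ...
} finally {
    inserter.shutdown();
}

// Phase 2: reopen the same store with the embedded API and add the nodes to the spatial index.
GraphDatabaseService db = new GraphDatabaseFactory().newEmbeddedDatabase(dbPath.getAbsolutePath());
SpatialDatabaseService spatialDb = new SpatialDatabaseService(db);
try (Transaction tx = db.beginTx()) {
    SimplePointLayer spl = spatialDb.createSimplePointLayer("testBatch", "latitudine", "longitudine");
    for (org.neo4j.graphdb.Node node : GlobalGraphOperations.at(db).getAllNodes()) {
        if (node.hasProperty("latitudine") && node.hasProperty("longitudine")) {
            spl.add(node);
        }
    }
    tx.success();
}
db.shutdown();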
I use a JNDI connection in my application and it is working, but I need to write JUnit tests for the connection. We don't use any Spring framework. This is the method I wrote to get the JNDI connection:
public Connection getConnection() throws SQLException {
    DataSource ds = null;
    InitialContext ic = null;
    Connection con = null;
    try {
        ic = new InitialContext();
        ds = (DataSource) ic.lookup("java:/DBs");
        con = ds.getConnection();
        return con;
    } catch (Exception e) {
        throw new SQLException(e);
    }
}
You can make use of the SimpleNamingContextBuilder that comes with the spring-test library. You can use this even if you aren't using Spring, as it isn't Spring-specific.
Below is an example of setting up a JNDI connection in the @Before of a JUnit test.
package com.example;

import org.springframework.jdbc.datasource.DriverManagerDataSource;
import org.springframework.mock.jndi.SimpleNamingContextBuilder;

public class SomeTest
{
    @Before
    public void contextSetup() throws Exception
    {
        SimpleNamingContextBuilder builder = SimpleNamingContextBuilder.emptyActivatedContextBuilder();
        DriverManagerDataSource dataSource = new DriverManagerDataSource("org.hsqldb.jdbcDriver", "jdbc:hsqldb:mem:testdb", "sa", "");
        builder.bind("java:comp/env/jdbc/ds1", dataSource);
        builder.bind("java:comp/env/jdbc/ds2", dataSource);
    }

    @Test
    public void testSomething() throws Exception
    {
        /// test with JNDI
    }
}
UPDATE: This solution also uses Spring's DriverManagerDataSource. If you want to use that, you will also need the spring-jdbc library. But you don't have to use it; you can create any object you like and put it into the SimpleNamingContextBuilder: for example, a DBCP connection pool, a JavaMail Session, etc.
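Applied to the lookup name in your question ("java:/DBs"), the @Before could bind a DataSource under that exact name, and the test can then call the method under test (a sketch; MyDao is a placeholder for whichever class holds getConnection()):

@Before
public void bindJndiDataSource() throws Exception {
    SimpleNamingContextBuilder builder = SimpleNamingContextBuilder.emptyActivatedContextBuilder();
    // Any DataSource works here; an in-memory HSQLDB keeps the test self-contained.
    DriverManagerDataSource dataSource = new DriverManagerDataSource("jdbc:hsqldb:mem:testdb", "sa", "");
    builder.bind("java:/DBs", dataSource);
}

@Test
public void getConnectionReturnsUsableConnection() throws Exception {
    try (Connection con = new MyDao().getConnection()) {   // MyDao stands in for the class under test
        assertNotNull(con);
    }
}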
OK. After a lot of searching I found a solution, and it is working for me. I want to share it with everybody, hoping it helps people who have the same issue. Please add the code below, and add ojdbc6.jar and naming-common-4.1.31.jar to your test libraries.
@BeforeClass
public static void setUpClass() throws Exception {
    try {
        System.setProperty(Context.INITIAL_CONTEXT_FACTORY,
                "org.apache.naming.java.javaURLContextFactory");
        System.setProperty(Context.URL_PKG_PREFIXES, "org.apache.naming");
        InitialContext ic = new InitialContext();
        ic.createSubcontext("java:");
        ic.createSubcontext("java:/comp");
        ic.createSubcontext("java:/comp/env");
        ic.createSubcontext("java:/comp/env/jdbc");
        OracleConnectionPoolDataSource ocpds = new OracleConnectionPoolDataSource();
        ocpds.setURL("your URL");
        ocpds.setUser("your username");
        ocpds.setPassword("your password");
        ic.bind("java:/yourJNDIName", ocpds);
    } catch (NamingException ex) {
        Logger.getLogger(yourTesTClass.class.getName()).log(Level.SEVERE, null, ex);
    }
}
If this is running outside the app server, then you'll likely need to supply parameters to the call for the InitialContext. But also realize that many DataSource implementations are not serializable so they won't work outside the container.
What you're writing is an integration test and it should be run in the container.
I'm trying to browse the JMS queue on a JBoss AS 7.1.1.Final using Hermes JMS, but I'm getting an "empty" JNDI tree. To investigate this, I wrote a simple program to dump the JNDI tree nodes from a JBoss server. The code is something like this:
public static void main(String[] args) throws Exception {
    final Properties jndiProperties = getJboss7Properties();
    // final Properties jndiProperties = getHornetQProperties();

    // Dumps the initial context contents
    InitialContext ctx = new InitialContext(jndiProperties);
    listRootJndiContext(ctx);

    // Simple lookup
    System.out.println(ctx.lookup("java:jms/RemoteConnectionFactory")
            .getClass().getName());
}

private static Properties getJboss7Properties() {
    final Properties jndiProperties = new Properties();
    jndiProperties.put(Context.INITIAL_CONTEXT_FACTORY,
            "org.jboss.naming.remote.client.InitialContextFactory");
    jndiProperties.put(Context.PROVIDER_URL, "remote://localhost:4447");
    jndiProperties.put(Context.SECURITY_PRINCIPAL, "guest");
    jndiProperties.put(Context.SECURITY_CREDENTIALS, "guest123");
    return jndiProperties;
}

private static void listRootJndiContext(Context ctx) throws NamingException {
    System.out.println("Listing root JNDI context:");
    NamingEnumeration<NameClassPair> list = ctx.list("");
    if (list.hasMore()) {
        while (list.hasMore()) {
            NameClassPair ncp = list.next();
            System.out.println(ncp.getName() + " (" + ncp.getClassName() + ")");
        }
    } else {
        System.out.println("Empty list!");
    }
}
When calling ctx.list(""), the returned list is always empty, even though a ctx.lookup("java:jms/RemoteConnectionFactory") returns a JMS Connection Factory as expected.
I tried to run the exact same code against a standalone HornetQ server (2.2.14.Final), changing the InitialContext properties to use the "old" jnp protocol, and the JNDI tree nodes were dumped correctly.
I also tried to run the same code (except for invoking the default InitialContext() constructor) within the server (in a Servlet) and it also worked as expected (dumping the JNDI tree nodes).
Is there any permission to be configured in standalone.xml or something like that?
Is this feature ("Remote JNDI browsing") implemented at all on JBoss AS 7.1.1.Final?