How to get and cast JNDI object correctly in Liberty - jndi

I have a big problem getting the correct instance, or at least casting the instance obtained via a JNDI lookup to the correct interface, on WebSphere Liberty (16.0.0.4, configured for Java 7 but running on Oracle Java 1.8.0_45 underneath, developing in Eclipse Neon.2).
When I start the server and the ear containing the EJB, I get the following notification into the log:
The server is binding the xxx.interfaces.MyLocal interface of the MyEJB enterprise bean in the xxx-ejb.jar module of the xxx-ear application. The binding location is: java:global/xxx/MyEJB!xxx.interfaces.MyLocal
Then I have a web application (EAR) which has a service provider (with @Produces) for the previously started EJB service; it exposes the JNDI resource as injectable (@Inject) to the rest of the application (a bit tricky, but the main idea is to allow the lookup location to be changed from a configuration file, plus some other things). It seems to do everything it is supposed to, but the resource obtained via JNDI only partly works.
If I put the EJB part as a dependency of my web module, I can inject it directly (@Inject MyLocal myEjb;).
As the injected resource I get an object with the signature:
EJSMyLocal0SLMyEJB_a4549339#cc5d2cdd
With the lookup I get an object with the signature (at the same time as the injection):
EJSMyLocal0SLMyEJB_a4549339#cdda36a7
(Not the same instance afaik, but the "type" is correct?)
The injected resource is automatically cast to the MyLocal interface and works fine.
When I check the resource obtained via JNDI, it does not qualify as an instance of MyLocal nor of MyRemote, and the explicit cast of course fails with a ClassCastException. (MyRemote is essentially the same as the MyLocal interface; MyLocal extends MyRemote, and the interfaces are annotated with @Local and @Remote accordingly.)
The EJB looks like this at the time of testing...
@Stateless
@Named
@Default
@Local(MyLocal.class)
@Remote(MyRemote.class)
public class MyEJB implements MyLocal, MyRemote { ... }
I also tried to cast the JNDI resource like this.
InitialContext ic = new InitialContext();
Object lookedUpEjb = ic.lookup(lookup); // the 'java:global...' location from the log
MyRemote jndiEjb = (MyRemote) PortableRemoteObject.narrow(lookedUpEjb, MyRemote.class);
// Tried also casting/checking 'instanceof' against MyLocal...
That makes no difference; the same ClassCastException occurs.
I have the following features in server.xml
<featureManager>
    <feature>javaee-7.0</feature>
    <feature>ldapRegistry-3.0</feature>
    <feature>localConnector-1.0</feature>
    <feature>adminCenter-1.0</feature>
    <feature>wsSecurity-1.1</feature>
    <feature>ejbLite-3.2</feature>
    <feature>ejbRemote-3.2</feature>
    <feature>cdi-1.2</feature>
    <feature>jpa-2.1</feature>
    <feature>jsf-2.2</feature>
    <feature>jaxrs-2.0</feature>
    <feature>jaxws-2.2</feature>
</featureManager>
I found this documentation on the Liberty JNDI functionality:
https://www.ibm.com/support/knowledgecenter/SSAW57_liberty/com.ibm.websphere.wlp.nd.multiplatform.doc/ae/twlp_ejb_remote.html
I can't see where I'm going wrong. How do I cast the object obtained from the JNDI lookup to the MyLocal or MyRemote interface?
---- Note ----
Using the @EJB annotation is not an option (although it works), since it would be a hard-coded reference to the resource. I want the reference to be optional, hence the JNDI lookup; @EJB causes the application to crash when the resource is not available.
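For reference, the producer is roughly along these lines (a minimal sketch; the class name and the error handling are simplified assumptions, and the lookup string is the binding location from the log above):

import javax.enterprise.inject.Produces;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class MyEjbProducer {

    // Hypothetical producer: looks the EJB up by a (normally configurable) JNDI name
    // and returns null instead of failing when the resource is not deployed.
    @Produces
    public MyLocal lookupMyEjb() {
        String lookup = "java:global/xxx/MyEJB!xxx.interfaces.MyLocal";
        try {
            return (MyLocal) new InitialContext().lookup(lookup);
        } catch (NamingException e) {
            return null; // the resource is optional, so its absence must not crash the application
        }
    }
}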

The problem is that each application has a different ClassLoader and the object that has been bound into JNDI was loaded with the ClassLoader of the application that defined the EJB.
This should not be an issue for Remote EJB interfaces as the ORB should have taken care of this for you. On a remote call that returns such an object, the ORB will serialize the object (from the target ClassLoader) and then deserialize using the client ClassLoader. For a lookup like this, the PortableRemoteObject.narrow should also take care of this. The failure here appears to just be a bug in the ORB.
In order to support cross-application access to Local EJB interfaces, either the Local EJB interface needs to be moved to a shared library that is used by both applications, or both applications need to be configured to use a single global ClassLoader. See this link for more information about using Local EJB interfaces across applications: Correct way to lookup local EJB in websphere - Getting ClassCastException (Note: this link discusses traditional WebSphere, but the issue is the same with Liberty, as is the resolution to use a shared library for the interface).
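A quick way to confirm this diagnosis from the client application is to compare the class loaders on both sides of the failing cast (a sketch, assuming the code runs in the web application and uses the binding location from the question):

Object lookedUpEjb = new InitialContext().lookup("java:global/xxx/MyEJB!xxx.interfaces.MyLocal");

// If the two loaders printed below differ, the MyLocal visible to the client and the
// MyLocal implemented by the looked-up proxy are different Class objects, so both
// 'instanceof' and the cast fail even though the interface names match.
System.out.println(lookedUpEjb.getClass() + " loaded by " + lookedUpEjb.getClass().getClassLoader());
System.out.println(MyLocal.class + " loaded by " + MyLocal.class.getClassLoader());
System.out.println("instanceof MyLocal? " + (lookedUpEjb instanceof MyLocal));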

Related

Java EE injection not working on Glassfish

(Please be kind, these are my first steps in Java EE).
I'm working with NetBeans 8.1, deploying an EJB module on a local GlassFish server.
I have a glassfish-resources.xml with the following resource defined:
<jdbc-resource enabled="true" jndi-name="java:app/jdbc/myDataSource"
And I have a DAO class where I'm trying to inject that resource
@Named
public class SimpleDal {
    @Resource(name = "jdbc/myDataSource", lookup = "java:app/jdbc/myDataSource")
    private static DataSource ds;
I have tried several ways to make this work, but I always end up with null in the ds variable. I've been debugging and querying the Context, and I always end up with the name java:app/jdbc/myDataSource not found.
Maybe I'm not doing something right; these are my first steps in Java EE (I've always developed in PHP). Can somebody please point me in the right direction so I don't lose more time? Thanks
Note: I've added the @Named annotation to the SimpleDal class because I've read somewhere that, for injection to work, the class has to be a managed bean.
So, after some time I finally found out that injection doesn't work reliably on static fields (at least in my setup). I changed the field to an instance field and it worked. Posting this answer for anyone who is facing the same situation.
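For completeness, the working version looks roughly like this (a sketch; the only change from the question is dropping the static modifier):

import javax.annotation.Resource;
import javax.inject.Named;
import javax.sql.DataSource;

@Named
public class SimpleDal {

    // Instance field: the container injects resources into managed instances,
    // whereas the static field was simply left null.
    @Resource(name = "jdbc/myDataSource", lookup = "java:app/jdbc/myDataSource")
    private DataSource ds;
}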

Grails's @Transactional will disable the @CompileStatic annotation

I added the two annotations to a service method. After compilation I found the method had been compiled into a new class file; I decompiled the generated class files and found that @CompileStatic did not work as expected.
Is this correct behaviour, or a bug in Grails?
class FoobarService {
    @grails.transaction.Transactional
    @groovy.transform.CompileStatic
    void foobar() {
        // ...
    }
}
The grails.transaction.Transactional annotation is a replacement for the traditional Spring org.springframework.transaction.annotation.Transactional annotation. It has the same attributes and features and works essentially the same, but it avoids an unfortunate side effect of using the Spring annotation.
The Spring annotation triggers the creation of a runtime proxy of the annotated class. Spring uses CGLIB to create a subclass of the target class (typically a Grails service) and an instance of the CGLIB proxy is registered as the Spring bean instead of registering a service instance directly. The proxy gets an instance of your service as a data variable.
Each method call is intercepted in the proxy, where it does whatever checks and/or setup is required based on the transaction settings, e.g. joining an existing transaction, creating a new one, throwing an exception because one isn't already running, etc. Once that's done, your real method is called.
But if you call another annotated method with different settings (e.g. the first method uses the default settings from @Transactional but the second should run in a new, separate transaction because it's annotated with @Transactional(propagation=REQUIRES_NEW)), then the second annotation's settings will be ignored, because you're "underneath" the proxy, inside the real instance of your service that the proxy is intercepting calls to. It can't intercept direct calls like that.
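In plain Spring terms the self-invocation problem looks roughly like this (a hypothetical service, not code from the question):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ReportService {

    @Transactional
    public void outer() {
        // A direct call on 'this' bypasses the CGLIB proxy, so the REQUIRES_NEW
        // setting on inner() is silently ignored and inner() simply joins the
        // transaction that was started for outer().
        inner();
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void inner() {
        // Only runs in its own transaction when invoked through the proxy,
        // e.g. from another bean or via a bean fetched from the application context.
    }
}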
The traditional workaround for this is to avoid direct calls and instead make the call on the proxy. You can't (at least not conveniently) inject the service bean into itself, but you can access the application context and access it that way. So the call that you would need in that situation would be something like
ctx.getBean('myService').otherMethod()
which works, but is pretty ugly.
The new Grails annotation works differently, though. It triggers a reworking of the code via an AST transformation during compilation. A second method is created for each annotated method, and the code from the real method is moved into it, wrapped in a GrailsTransactionTemplate that runs the code using the annotation's settings. Once there, the code runs with the required transaction settings, but since every method is rewritten this way, you don't have to worry about the proxy or about where you call the methods from - there is no proxy.
Unfortunately there's a side effect that you're seeing - apparently the transformation happens in a way that doesn't preserve the @CompileStatic annotation, so the code runs in dynamic mode. Sounds like a bug to me.

How can you log from a Neo4j Server Plugin?

I'm trying to debug a problem in the Neo4J Server plugin I'm writing. Is there a log I can output to? It's not obvious where or how to do this.
Good question. I think you could use Java Logging? That should be routed into the normal logging system.
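A minimal sketch of that approach (assuming, as this answer suggests, that java.util.logging output is routed into the server's normal logging; the class and logger names here are made up):

import java.util.logging.Level;
import java.util.logging.Logger;

public class MyServerPlugin {

    // Plain java.util.logging logger; the name is arbitrary but handy for filtering.
    private static final Logger LOG = Logger.getLogger("my.neo4j.plugin");

    public void doSomething() {
        LOG.log(Level.INFO, "doSomething called");
        LOG.log(Level.FINE, "debug-level detail");
    }
}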
Just inject org.neo4j.logging.Log into the class containing the implementation of your Neo4j stored procedure.
public class YourProcedures {

    @Context
    public Transaction tx;

    @Context
    public Log log;

    @Procedure(value = "yourProcedure", mode = Mode.READ)
    public Stream<YourResult> yourProcedure(@Name("input") String input) {
        log.debug("something");
        return Stream.empty(); // return the real result stream here
    }
}
Logs are then written to the standard Neo4j log file.
The level is controlled by the GraphDatabaseSettings.store_internal_log_level configuration setting.
The level can also be changed at runtime. Just inject a DependencyResolver bean and define this admin procedure. (The framework has a listener hooked to configuration changes which reconfigures the internal logging framework. This is the simplest solution I could find.)
@Context
public DependencyResolver dependencyResolver;

@Procedure(value = "setLogLevel", mode = Mode.DBMS)
@Description("Runtime change of logging level")
public void setLogLevel(@Name("level") String level) {
    Config config = dependencyResolver.resolveDependency(Config.class);
    config.set(GraphDatabaseSettings.store_internal_log_level, Level.valueOf(level));
}
UPDATE:
This ^ solution works, but it is insufficient when one wants to use logging the way it is usually used in Log4j - different loggers organized in a hierarchy, each logger at its own level. The org.neo4j.logging.Log component is just a wrapper around the Log4j logger for the GlobalProcedures class. That logger is only one of many loggers in the hierarchy, and in fact the wrapper blocks access to the richer features of the underlying framework. (Unfortunately, defining multiple @Context Log fields in the YourProcedures class, distinguished by some logger-qualifying annotation, is also impossible, because field injection is driven by a Map<Class, instance>, so there is only one possible instance to inject for any @Context-annotated field of a given type.)
Solution 1:
Use JUL as in the accepted answer. The disadvantage is that JUL redirects log events to the underlying Log4j anyway, so if the logger hierarchy is defined in JUL, Log4j must be set to the lowest possible level in order for the JUL levels to take effect.
Solution 2:
Use Log4j directly (i.e. public static final Logger logger = LogManager.getLogger("some.identifier.in.hierarchy") in YourProcedures). There are some issues with redefining the configuration programmatically, though it is possible; I dropped this solution only because I had trouble deploying it in a non-Docker environment.
Solution 3: (finally chosen)
I defined a custom component, LogWithHierarchy (it can be built from your own ExtensionFactory loaded via ServiceLoader - I was inspired by the APOC config implementation). This component provides an API of the form debug(loggerName, message), info(loggerName, message), etc. The component knows the original Log, drills down into its Log4j LoggerContext and redirects all logging requests to the appropriate logger in that LoggerContext. Log messages finally end up in debug.log. With this solution the original Log4j logger hierarchy is fully utilized, levels can be changed dynamically at runtime (setLogLevel must be changed to operate on the aforementioned LoggerContext), and everything is still implemented using standard Neo4j plugin support.

WCF Client calling a Java Web Service : XmlSerializer cannot find Derived types

This seems like a fundamental web services problem, yet an elegant solution has been elusive despite the research I have been able to do. I guess I am missing something here.
I am using a WCF client to connect to an external web service over which I have no control. The external web service is Java-based. A bunch of assemblies are provided for calling the methods in the web service. These assemblies contain base classes and derived classes. The web service methods take a base class as a parameter, whereas from the WCF client I instantiate a derived class and pass it to the method.
To simulate this scenario, I created a small project with one ClassLibrary which has a BaseClass and a DerivedClass with one method.
Then I create an ASMX web service and add a HelloWorld method to it, along with a reference to the ClassLibrary. This method takes a BaseClass-typed parameter.
Then I create a Service Reference to the ASMX web service. In the proxy class, I add an XmlSerializerFormatAttribute to the method if it is not already there.
From the WCF client, I call the ASMX web method
BaseClass bc = new Derived();
ServiceReference1.TestService ts = new WCFTest.ServiceReference1.TestService();
lbl1.Text = ts.HelloWorld(bc);
The call fails with error
The type ClassLib.Derived was not expected. Use the XmlInclude or SoapInclude attribute to specify types that are not known statically.
The only way I could call this web service method was by adding XmlInclude attribute to the BaseClass in the ClassLibrary.
In my scenario, this library is a DLL provided by an external vendor, so I cannot add attributes to its classes. I have looked at DataContractSerializer, KnownTypes and the XmlSerializer constructor, but those solutions do not seem applicable in my scenario.
How can I make XMLSerializer see the Derived classes in the assemblies I have referencing in the WCF Client? Is there an elegant solution?
Thanks,
Hem
Including your own type mapping for an XmlSerializerOperationBehavior may just work, but I haven't tried this (see GetXmlMappings).
http://msdn.microsoft.com/en-us/library/system.servicemodel.description.xmlserializeroperationbehavior.aspx
Alternatively, forcing use of the DataContractSerializer via a DataContractSerializerOperationBehavior (as opposed to the XmlSerializerOperationBehavior it's using now) may work too, if you specify your own known types
http://msdn.microsoft.com/en-us/library/ms576751%28v=vs.85%29.aspx
Finally, as a last resort, you can force use of the DataContractSerializer using the DataContractSerializerOperationBehavior, then specify your own DataContractSurrogate to force use of the XmlSerializer where you can pass custom types to its constructor (which circumvents the requirement for the XmlInclude attribute).
http://msdn.microsoft.com/en-us/library/ms751540.aspx
Hope that helps.

Best way to use an IoC container for retrieving runtime settings

I have a C# DLL project whose runtime settings have to be stored in an external XML file, and this DLL will be used in an ASP.NET/ASP.NET MVC application whose runtime settings also have to be stored in an external file.
Which IoC container can be used to create an object with the runtime settings loaded from a specific external file (or app.config/web.config), and also works for web applications running in medium trust? Any howto/tutorial would be greatly appreciated.
So far I've found only these articles:
Use Castle Windsor to Read Your Config Files Automatically
Getting rid of strings (3): take your app settings to the next level
Update
I'm sending mails from my DLL to a variable number of SMTP servers, based on the current record type. For type A I'm using a given SMTP server and port; for type B I'm using an alternate set of server and port values. Of course, I want to be able to modify those values after deployment, so I store them in an XML file.
If I'm storing the SMTP settings as an SMTPConfiguration class with two properties (SMTPServer as String and SMTPPort as Int32), is it possible for an IoC container to return the required object based on the given record type, and what is the best way to read the runtime settings in order to build the returned object?
Update2
Let's say I'm storing in the configuration file the following parameters: ASMTPServer, BSMTPServer, ASMTPPort, BSMTPPort.
I can use Castle DictionaryAdapter to read all those settings as properties of an AppConfiguration class.
What is the recommended way to specify that the required SMTPConfiguration instance should use the ASMTPServer and ASMTPPort values if I'm passing a type A record as a parameter (and the BSMTPServer and BSMTPPort values if I'm passing a type B record)? Also, how can I specify that the AppConfiguration is required in this process?
Is there a pattern for initializing objects created with a DI container
Windsor Config Parameters With Non-Primitive Types
