EJB improved a great deal in its 3.x versions; Spring is also commonly used, and version 3 is a good alternative.
There are many articles on the web, but no exact comparison of EJB 3.x versus Spring 3.x. Do you have any thoughts on them? In real-world examples, which one is better under which conditions?
For example, we want to separate the database and the server, meaning our application will run on one server and our database will live on another server. EJB remoting vs. Cluster4Spring, etc.?
Is doing everything with @Annotation always good? Is configuration never needed?
For your use case, where the application runs on one server and the database runs on another, the choice between EJB and Spring is irrelevant. Every platform supports this, be it a Java SE application, a simple servlet container like Tomcat or Jetty, PHP, Ruby on Rails, or whatever.
You don't need any kind of explicit remoting for that. You just define a datasource, provide the URL where your DB server lives, and that's it.
That said, both EJB and Spring beans do make it easier to work with datasources. Both help you define a datasource, inject it into beans, and manage the transactions associated with it.
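To make that concrete, here is a minimal sketch of what "just define a datasource" can look like in a Java EE 6 container. The JNDI name, driver class, URL and credentials below are illustrative assumptions, not anything from the question:

    import javax.annotation.Resource;
    import javax.annotation.sql.DataSourceDefinition;
    import javax.ejb.Stateless;
    import javax.sql.DataSource;

    // Defines a datasource pointing at a database on another machine.
    @DataSourceDefinition(
        name = "java:app/jdbc/inventoryDS",               // hypothetical JNDI name
        className = "org.postgresql.ds.PGSimpleDataSource",
        url = "jdbc:postgresql://db-host:5432/inventory", // DB on a separate server
        user = "app",
        password = "secret")
    @Stateless
    public class InventoryService {

        // The container injects the datasource; no explicit remoting is involved.
        @Resource(lookup = "java:app/jdbc/inventoryDS")
        private DataSource ds;
    }

The Spring equivalent is a datasource bean plus an injection point; either way, the database living on another server is just a detail of the URL.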
Of the two, EJB (and Java EE in general) is more lightweight and adheres more to the convention over configuration principle. Spring requires more verbosity to get the same things and depends a lot on XML files which can quickly become very big and unwieldy. The flip side of the coin is that Spring can be less magical and you might feel more in control after having everything you want spelled out.
Another issue is the way EJB and Spring are developed.
EJB is free (as in free beer), open-source and non-proprietary. There are implementations of EJB made by non-profit organizations (Apache), open-source companies (Red Hat/JBoss) and deeply commercial closed-source enterprises (IBM). I personally would avoid the latter, but to each his own.
Spring, on the other hand, is free and open-source, but strongly proprietary. There is only one company making Spring, and that's SpringSource. If you don't agree with Rod, then tough luck for you. This is not necessarily a bad thing, but it is a difference you might want to be aware of.
Is doing everything with @Annotation always good? Is configuration never needed?
It's an endless debate really. Some argue that XML is hard to maintain, others argue that annotations pollute an otherwise pure POJO model.
I think that declaring a bean as an EJB stateless bean (@Stateless) or a JPA entity (@Entity) is more cleanly done with annotations. The same goes for the @EJB or @Inject dependency injections. On the other hand, I prefer JPQL named queries to live in XML files instead of annotations, and injections that represent pure configuration data (like a max value for something) to be in XML as well.
In Java EE, every annotation can also be specified in XML. If both the annotation and the XML equivalent are present, the XML overrules the annotation. This makes it really convenient to start with an annotation for the default case, but override it later via XML for a specific use case.
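As a minimal sketch of that override mechanism (the bean and method names are invented for illustration):

    import javax.ejb.Stateless;
    import javax.ejb.TransactionAttribute;
    import javax.ejb.TransactionAttributeType;

    @Stateless
    public class OrderService {

        // The default case, declared as an annotation...
        @TransactionAttribute(TransactionAttributeType.REQUIRED)
        public void placeOrder(String orderId) {
            // ...
        }
    }

If META-INF/ejb-jar.xml later declares a <container-transaction> entry for placeOrder with a different transaction attribute, the descriptor wins, without touching the code.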
The current preference in Java EE seems to be more towards (simple) annotations combined with a large amount of convention over configuration.
The real question you should be asking is CDI/EJB or Spring
It's often not Spring vs EJB, but Spring vs Java EE. EJB itself compares to Spring Beans. Both of them are a kind of managed beans running inside a container (the EJB container resp. Spring container).
Overall the two technologies are rather similar. Reza Rahman did a great comparison between the two a while back.
EJBs are more advantageous because of standardization. If you are working on a lightweight application, I think going with Spring is fine, but if you expect your application to grow large and to require a lot of memory access and data connections, you may consider starting your development with EJBs. The main reason is that clustering and load balancing are built into the EJB framework.
In an EJB environment, when an EAR ('E'nterprise 'AR'chive) is deployed, it may contain multiple EJBs that each serve a specific purpose. Let's say you wrote one bean for user management and another for product management. Maybe one day you find that your user services far exceed your product access services, and you want to move your user bean to a different server on a different machine. This can actually be done at runtime without altering your code. Beans can be moved between servers and databases to accommodate clustering and load/data balancing without affecting your developers or your users, because most of it can be configured at the deployment level.
Another reason for supporting a standard is knowing that most large third party vendors will likely support it resulting in less issues when integrating with new standard/service/technology - and let's face it, those come out like new flavours of ice-cream. And if it is in a public specification new start-up companies or kind developers can create an open-source version.
http://www.onjava.com/pub/a/onjava/2005/06/29/spring-ejb3.html
It is most unfortunate that even the most intelligent designers or programmers cannot predict which of their features may or may not be embraced by the development community, which is the main reason software becomes bloated... Java EE is definitely that!
Choose one or the other, but not both.
My personal preference is Spring. I've used it on projects with great success for the past six years. It's as solid as any software out there.
Spring can work with EJBs if you choose to have them in your app, but I don't believe the reverse is true.
I would recommend separate physical machines for web, app, and database servers if you can afford it.
Spring can work with several remoting options, including SOAP and REST web services. A comparison of Spring beans with EJB is beyond the scope of this question, and I don't see what it has to do with your implementation. If you use Spring POJO services, they're in-memory rather than requiring another network hop like remote EJBs. Think of Fowler's First Law of Distributed Object Design: don't distribute your objects. Only introduce latency with good reason.
I'd mention unit testing here.
In a common web application (controller->service->data->...->view), EJB and Spring both produce similar results, but Spring offers easier testing.
In my humble experience, the way you develop differs in a couple of aspects:
Unit testing (Spring wins). In Spring it's done pretty straightforwardly, while in EJB you have to use Arquillian with ShrinkWrap (sic!), which is slow to run on every build; see the sketch after this list.
Persistence (EJB wins). In Spring there is some struggle around it; e.g. google "how to autowire persistence in entity listener" http://bit.ly/1P6u5WO
Configuration (EJB wins). As a newbie coming to Spring from EJB, I was surprised by the swarm of annotations and .xml files.
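To illustrate the unit-testing point, here is a minimal sketch of the Spring side, assuming Spring 3.x with spring-test and JUnit 4 on the classpath; UserService and the context file are hypothetical names:

    import static org.junit.Assert.assertNotNull;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.test.context.ContextConfiguration;
    import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration("classpath:test-context.xml") // hypothetical context file
    public class UserServiceTest {

        @Autowired
        private UserService userService; // hypothetical bean under test

        @Test
        public void wiresTheServiceWithoutAContainer() {
            // Runs in a plain JUnit process; no application server and no
            // Arquillian deployment needed.
            assertNotNull(userService);
        }
    }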
EJB 3.1 is the best choice, while also being the standard for Java EE 6 applications.
Spring still does not support Java EE 6 CDI (Weld) and still depends a lot on XML configuration. EJB 3.1 is powerful and smart.
I think that Spring 3.1 doesn't need any XML configuration. You have the option to use annotations for configuration.
Related
We are in the process of disentangling a classic legacy monolithic EAR-packaged Java EE application. Our (most complex) component wiring pattern is as follows: component A 'requires' interface X, whilst components B and C (... N) each 'provide' interface X. Our requirement is to package and deploy A, B, C and X separately and independently in order to minimize downtime and minimize business impact.
We therefore require the necessary robustness to allow providers (B, C) of interfaces to be removed and added (redeployed) at runtime, without requiring a redeployment of the consumers (A) of the interface, nor a restart of the server. The solution will run on WildFly 8, but can make use of other technologies as long as they work on WildFly 8.
We've implemented a POC using JBoss-OSGi and Weld-OSGi which fulfilled all of our requirements and offered us an excellent migration path as well. However, in WildFly 8 Alpha 3, JBoss-OSGi was removed from the default distribution. This made us think we should explore alternatives that are more in line with the thinking of the people behind WildFly.
The question therefore is: on WildFly 8, what is the alternative to OSGi for inter-module service injection that would meet our requirements?
For reasons of budget, simplicity, performance overhead and company policy, we've had to eliminate the following:
1. Remote EJBs
2. Web services
3. JSON/REST
4. SCA
Please note that this is not a request for a debate on the viability of OSGi, nor for an evaluation or comparison of different solutions. I am simply looking for any solution(s) that would meet our criteria and are NOT based on OSGi.
Since you're asking about the thinking of the people behind WildFly, I will refer you to the following mail-list message. It was posted to the Jigsaw development list by David Lloyd, who is (I believe) the designer of JBoss Modules on which WildFly is based. The context was a discussion about the introduction of a service model into Jigsaw: http://mail.openjdk.java.net/pipermail/jigsaw-dev/2012-February/002161.html
What David seems to be saying is either that the idea of services is itself flawed – i.e. you don't need them! – or that the requirement is already sufficiently solved by the ServiceLoader API, which was introduced in Java 6.
However, ServiceLoader is known not to work on module systems that use classloader isolation, which includes both OSGi and JBoss Modules. This is because ServiceLoader uses classpath scanning, and in a module system there is no "classpath". In OSGi we have specced a way of adapting ServiceLoader (though it's yucky and requires bytecode munging). Perhaps JBoss Modules also has a way of handling this, but I couldn't find anything from a quick scan of their docs.
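For context, this is all that ServiceLoader does; PaymentProvider here is a hypothetical service interface, with implementations listed in a META-INF/services file:

    import java.util.ServiceLoader;

    interface PaymentProvider {
        String name();
    }

    public class ProviderLookup {
        public static void main(String[] args) {
            // Scans META-INF/services/PaymentProvider entries on the classpath --
            // exactly the flat-classpath assumption that classloader isolation breaks.
            ServiceLoader<PaymentProvider> loader = ServiceLoader.load(PaymentProvider.class);
            for (PaymentProvider provider : loader) {
                System.out.println("found provider: " + provider.name());
            }
        }
    }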
Anyway as I said in my comment above, I'm puzzled about your motivation. You clearly get benefits from the service model provided by OSGi, and JBoss-OSGi is still available and supported by Red Hat... so why not continue to use it? Especially if there is nothing clearly provided by WildFly out-of-the-box that does what you want.
Apache Felix can be embedded in your application server as an 'OSGi host'. Then you can create a plugin mechanism for the required system. All of your services can be implemented as 'bundles'. The OSGi host in the server can find the bundles in a deployment folder and install/start them. You can then enable your web service, REST and other services without restarting the application server.
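A minimal sketch of that embedding approach, using the standard OSGi launcher API (any R4.2+ framework such as Felix will do); the cache directory and bundle path are placeholders:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.ServiceLoader;

    import org.osgi.framework.BundleContext;
    import org.osgi.framework.launch.Framework;
    import org.osgi.framework.launch.FrameworkFactory;

    public class EmbeddedOsgiHost {
        public static void main(String[] args) throws Exception {
            // Discover whichever OSGi framework implementation is on the classpath.
            FrameworkFactory factory =
                    ServiceLoader.load(FrameworkFactory.class).iterator().next();

            Map<String, String> config = new HashMap<String, String>();
            config.put("org.osgi.framework.storage", "osgi-cache"); // placeholder dir

            Framework framework = factory.newFramework(config);
            framework.start();

            // Install and start a 'plugin' bundle from a deployment folder,
            // without restarting the host process.
            BundleContext ctx = framework.getBundleContext();
            ctx.installBundle("file:deploy/my-service-bundle.jar").start();

            framework.waitForStop(0);
        }
    }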
Where I work, we had to pick something in order to continue the project when JBoss-OSGi was declared dead. We went with a JBoss Modules + EJB approach, since both are actually supported by Red Hat. JBoss Modules is used for static module dependencies, and EJB for runtime injection of services.
We don't use remote EJBs but EJB 3.x local EJBs, and those weren't ruled out in your list, so I guess it's OK to offer this.
Is it possible to use JSF 2.0 (PrimeFaces, for example) as the view layer for the Play Framework? I'd like to combine Play's elastic hot redeployment with easy component-driven JSF development (instead of MVC and template-driven GUI design).
I think all I need is:
1. Run the Faces Servlet (javax.faces.webapp.FacesServlet) and maybe some other servlets
2. Tweak the EL resolver in faces-config.xml, just like org.springframework.web.jsf.el.SpringBeanFacesELResolver does
Has anyone done something like this? I'm new to the Play Framework. I use JSF + Spring + JPA now.
JSF is fully based on the stateful aspects of the Java EE web stack and on the Servlet API!
Play is a fully stateless framework and doesn't use the Servlet API at all!
So the answer is: no, you shouldn't use JSF as the view layer of Play. I say "shouldn't" instead of "can't" because everything is possible, but it would be a really bad thing!
Nevertheless, if you already want to leave MVC and template-driven design behind, you really should think about leaving JSF too. If Play! exists, it's not only because Rails/Django/Symfony are good; it's also because JSF-like frameworks aren't good, efficient or viable solutions, for many reasons you can find everywhere on the web, or maybe even in your own experience.
I would advise you to give Play + JPA (or even something else in place of JPA, such as Siena) a real try. Don't begin by mixing Java EE stuff with it; use Play 100% to see how it performs. If you need to use Spring with Play, there is no problem, but it's not required in many cases. You will discover how easy and efficient it is to build apps, from the smallest to the biggest enterprise ones. In my experience, since I started using Play, I find this framework promises things and keeps those promises, which is very rare in this world!
Have fun!
By default, no, this is not possible.
Play does not conform to the J2EE specification, and as such does not implement the Servlet specification.
However, it may be possible with a fair amount of effort. Play developers have already created a ServletWrapper that allows Play to be deployed to standard servlet containers (like JBoss, Tomcat, etc.), so they have shown that you can integrate with J2EE technology if you want to spend the time and effort to write your own plugin that overrides the default nature of Play.
I wouldn't bother, though. Just take a look at the template engine that comes with Play. It is very good, and I have not missed JSPs at all since using the Groovy templates.
You can use JSF 2.2 + PrimeFaces and Spring + the Akka framework, which is better and faster than Play.
I started digging into the Liferay 6.x ServiceBuilder framework and really liked its code-generation approach. A simple service.xml file can generate ready-to-use, powerful services without writing a single line of code.
I also tried looking into AndroMDA, which can generate similar services from a UML model. That sounds even more interesting, since it would link my business model directly, without me needing to learn a new XML config (service.xml in the case of Liferay ServiceBuilder).
Now I am in the process of deciding which tool I should use. Based on your experience with either of these tools, please let me know the pros/cons of using them.
I am interested in these aspects, along with your own thoughts:
Which is better for keeping my development productive in the long term?
If I use ServiceBuilder, will I be able to use the services outside the portal environment (say, running the same service from a non-portal app server)?
Is a UML-driven approach always good, or are there practical cons/challenges to it?
Do you know of any other code-generation library that is better than these two for Liferay 6.x development? I also checked these SO threads:
Do You Use Code Generators
Java Code Generation
Following are a few problems I have experienced with ServiceBuilder (I am using Liferay 5.2.3):
Not able to make use of an ORM framework. There is no way to generate relations among objects; because of this, I am effectively working with just an object mapper. It does not generate one-to-many kinds of relations.
Cannot use basic object-oriented features like inheritance with the domain or services
It is quite hard to write unit test cases
I still don't understand the need for such a complex domain structure
I feel the code it generates could be quickly written using an IDE
But it definitely has its own benefits; like Egar said, it is specifically made for Liferay, so it can quickly generate everything that is needed for Liferay. I have heard that in the latest versions of Liferay a few of the above problems are fixed.
Overall it depends on your requirements. If you need more control over your ORM layer and you have complex business logic that needs quite a lot of unit testing, go for normal Spring services, which can be exposed as web services or REST services to your portlets.
Otherwise ServiceBuilder is also good for simple portlets. Another approach could be to use both: all complex services as a separate project, and the simple ones with ServiceBuilder.
There is an important fact that you should be aware of: ServiceBuilder has been used to help build the portal itself, and it is tightly integrated into it. You cannot use it outside of Liferay... I mean, it could probably be taken and modified for general usage, but I doubt that would make sense.
Most importantly, the Portal and each plugin that you are developing have their own web application context in the servlet container, each with its own classloader. Plugins use the Portal classloader and portal services, etc.
Simply put, ServiceBuilder-generated code and its Spring context can exist only if there is a webapp/ROOT/, which is the Liferay Portal with the portal classloader, etc.
AndroMDA is an MDA framework for general usage. I don't know it well, so I am not going to make comparisons. The power of ServiceBuilder is precisely that it is not a framework for general usage; that makes it all the more powerful for Liferay plugin development.
XML seems to be the language of the day, but it's not type-safe (without an external tool to detect problems) and you end up doing logic in XML. Why not just do it in the same language as the rest of the project? If it's Java, you could just build a config JAR and put it on the classpath.
I must be missing something deep.
The main downside to doing DI configuration in code is that you force a recompilation in order to change your configuration. By using external files, reconfiguring becomes a runtime change. XML files also provide extra separation between your code and configuration, which many people value highly.
This can make it easier for testing, maintainability, updating on remote systems, etc. However, with many languages, you can use dynamic loading of the code in question and avoid some of the downsides, in which case the advantages diminish.
Martin Fowler covered this decision pretty well here:
http://martinfowler.com/articles/injection.html
Avoiding the urge to plagiarize... just go read the section "Code or configuration files".
There's nothing intrinsically wrong with doing the configuration in code, it's just that the tendency is to use XML to provide some separation.
There's a widespread belief that somehow having your configuration in XML protects you from having to rebuild after a change. The reality, in my experience, is that you need to repackage and redeploy the application to deliver the changed XML files (in the case of web development, anyway), so you could just as easily change some Java "configuration" files instead. You could just drop the XML files onto the web server and refresh, but in the environment I work in, audit would have a fit if we did.
The main thing that using XML configuration achieves, in my opinion, is forcing developers to think about dependency injection and separation of concerns. In Spring (amongst others), it also provides a convenient hook to hang your AOP proxies and suchlike on. Both of these can be achieved in Java configuration; it is just less obvious where the lines are drawn, and the tendency may be to reintroduce direct dependencies and spaghetti code.
For information, there is a Spring project to allow you to do the configuration in code.
The Spring Java Configuration project (JavaConfig for short) provides a type-safe, pure-Java option for configuring the Spring IoC container. While XML is a widely-used configuration approach, Spring's versatility and metadata-based internal handling of bean definitions means alternatives to XML config are easy to implement.
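A minimal sketch of that style, with an invented collaborator class to keep it self-contained:

    import javax.sql.DataSource;

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.jdbc.datasource.DriverManagerDataSource;

    @Configuration
    public class AppConfig {

        // Hypothetical collaborator, defined here only for the sketch.
        public static class AuditService {
            private final DataSource dataSource;
            public AuditService(DataSource dataSource) { this.dataSource = dataSource; }
        }

        // The equivalent of a <bean> element, but checked by the compiler.
        @Bean
        public DataSource dataSource() {
            DriverManagerDataSource ds = new DriverManagerDataSource();
            ds.setUrl("jdbc:h2:mem:demo"); // placeholder URL
            return ds;
        }

        @Bean
        public AuditService auditService() {
            return new AuditService(dataSource()); // dependency expressed in plain Java
        }
    }

Bootstrapping is then new AnnotationConfigApplicationContext(AppConfig.class) instead of pointing at an XML file.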
In my experience, close communication between the development team and the infrastructure team can be fostered by releasing frequently. The more you release, the more you actually know about the variability between your environments. This also allows you to remove unnecessary configurability.
A corollary to Conway's law applies here: your config files will come to resemble the variety of environments your app is deployed to (planned or actual).
When I have a team deploying internal applications, I tend to drive towards config in code for all architectural concerns (connection pools, etc.), and config in files for all environmental config (usernames, connection strings, IP addresses). If there are different architectural concerns across different environments, I'll encapsulate those into one class and make that class name part of the config files, e.g.
container.config=FastInMemoryConfigurationForTesting
container.config=ProductionSizedConfiguration
Each one of these will use some common configuration, but will override/replace those parts of the architecture that need replacing.
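A sketch of what drives that, assuming a simple properties file and a hypothetical ContainerConfiguration contract that both variants implement:

    import java.io.FileInputStream;
    import java.util.Properties;

    public class ContainerBootstrap {

        // Hypothetical contract implemented by FastInMemoryConfigurationForTesting,
        // ProductionSizedConfiguration, and friends.
        public interface ContainerConfiguration {
            void apply();
        }

        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.load(new FileInputStream("container.properties")); // placeholder file

            // The environment's config file names the architecture variant;
            // the code loads and applies it reflectively.
            String className = props.getProperty("container.config");
            ContainerConfiguration config =
                    (ContainerConfiguration) Class.forName(className).newInstance();
            config.apply();
        }
    }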
This is not always appropriate however. There are several things that will affect your choice:
1) The time it takes after releasing a new drop before it is deployed successfully in each production environment and you receive feedback on that environment (cycle time)
2) The variability in deployed environments
3) The accuracy of feedback garnered from the production environments.
So, when you have a customer who distributes your app to their dev teams for deployment, you are going to have to make your app much more configurable than if you push it live yourself. You could still rely on config in code, but that requires the target audience to understand your code. If you use a common configuration approach (e.g. Spring), you make it easier for the end users to adapt and work around issues in their production environments.
But a rubric is: configurability is a substitute for communication.
XML is not meant to contain logic, and it's far from being a programming language.
XML is used to store data in a way that is easy to understand and modify.
As you say, it's often used to store definitions, not business logic.
You mentioned Spring in a comment to your question, so you may be interested in the fact that Spring 3 lets you express your application contexts in Java rather than XML.
It's a bit of a brain-bender, but the definition of your beans, and their inter-dependencies, can be done in Java. It still keeps a clean separation between configuration and logic, but the line becomes that bit more blurred.
XML is mostly a data (noun) format. Code is mostly a processing (verb) format. From the design perspective, it makes sense to have your configuration in XML if it's mostly nouns (addresses, value settings, etc) and code if it's mostly verbs (processing flags, handler configurations, etc).
It's bad because it makes testing harder.
If you're writing code and using methods like getApplicationContext() to obtain the dependencies, you're throwing away some of the benefits of dependency injection.
When your objects and services don't need to know how to create or acquire the resources on which they depend, they're more loosely coupled to those dependencies.
Loose coupling means easier unit testing. It's hard to get something into a JUnit test if you need to instantiate all of its dependencies. When a class makes no assumptions about its dependencies, it's easy to use mock objects in place of real ones for the purpose of testing.
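A minimal sketch of that point, with all names invented: the class receives its dependency instead of looking it up, so a test can hand it a stub:

    public class ReportService {

        // Hypothetical dependency interface.
        public interface UserStore {
            int countUsers();
        }

        private final UserStore store;

        public ReportService(UserStore store) { // injected, never looked up
            this.store = store;
        }

        public String summary() {
            return "users: " + store.countUsers();
        }

        public static void main(String[] args) {
            // In a plain unit test, a stub replaces the real database-backed store.
            ReportService service = new ReportService(new UserStore() {
                public int countUsers() { return 42; }
            });
            System.out.println(service.summary()); // users: 42
        }
    }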
Also, if you can resist the urge to use getApplicationContext() and other code-based lookup techniques, then you can (sometimes) rely on Spring autowiring, which means even less configuration work. Configuration work, whether in code or in XML, is tedious, right?
Suppose I want to implement an application container. Not a full-on Java EE stack, but I need to provide access to JDBC resources and transactions to third party code that will be deployed in an application I'm writing.
Suppose, further, that I'm looking at JBossTS for transactions. I'm not settled on it, but it seems to be the best fit for what I need to do, as far as I can tell.
How do I integrate support for providing connection resources and JTA transactions into my Java SE application?
How do I integrate support for providing connection resources and JTA transactions into my J2SE application?
Hi Chris
There are two elements to this problem:
1) Making the JTA API, mainly UserTransaction, available to application code so it can start and end transactions. In a Java EE environment it's published into a well-known location in JNDI. If you have a JNDI implementation, that's the way to go (use JBossTS' JNDIManager class to help you with the setup). Otherwise, you need some kind of factory object or injection mechanism. Of course you can also expose the implementation class directly to the end user, but that's somewhat nasty, as it limits any chance of swapping out the JTA implementation in the future.
    public javax.transaction.UserTransaction getUserTransaction() {
        return new com.arjuna.ats.internal.jta.transaction.UserTransactionImple();
    }
That's it – you can now begin, commit and roll back transactions. Some containers also publish the TransactionManager class to applications in a similar fashion, but it's really designed for use by the container itself and is rarely needed by application code.
2) Managing enlistment of XAResources automatically. Resource managers, i.e. databases and message queues, have drivers that implement XAResource. Each time the application gets a connection to the resource manager, a corresponding XAResource needs to be handed off to the JTA implementation so that it can drive the resource manager as part of the 2PC. Most app servers come with a JCA that handles this automatically. In environments without one, you need some alternative to save the application code from having to do this tedious task by hand. The TransactionalDriver bundled with JBossTS handles this for JDBC connections. XAPool may also be worth considering.
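For illustration, this is roughly the chore being automated, using only the standard JTA API; how you obtain the TransactionManager and the driver's XAResource is implementation-specific, so both arrive as parameters in this sketch:

    import javax.transaction.Transaction;
    import javax.transaction.TransactionManager;
    import javax.transaction.xa.XAResource;

    public class ManualEnlistment {

        public void doWork(TransactionManager tm, XAResource dbResource) throws Exception {
            tm.begin();
            boolean ok = false;
            try {
                // Hand the resource manager's XAResource to the JTA implementation
                // so it participates in the two-phase commit.
                Transaction tx = tm.getTransaction();
                tx.enlistResource(dbResource);

                // ... JDBC/JMS work against the enlisted resource goes here ...

                ok = true;
            } finally {
                if (ok) {
                    tm.commit();   // the TM drives 2PC across all enlisted resources
                } else {
                    tm.rollback();
                }
            }
        }
    }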
JBossTS has been embedded in many environments over the years. Some of the lessons learned are documented in the Integration Guide [http://anonsvn.jboss.org/repos/labs/labs/jbosstm/trunk/atsintegration/docs/] and if you want a worked example you could look at the Tomcat integration work [http://anonsvn.jboss.org/repos/labs/labs/jbosstm/workspace/jhalliday/tomcat-integration/].
JBoss's TM is horrible. At least, if you are hoping for ACID transactions.
Hi erickson
I don't think I'd go quite as far as 'horrible'. It's incredibly powerful and highly configurable, which can make the out-of-the-box experience a bit daunting for newcomers. Correct recovery configuration is particularly tricky, so I fully endorse your comment about rigorous testing. Beyond that, I'm not aware of any documented test cases in which it currently fails to provide ACID outcomes when used with spec-compliant resource managers. If you have such a case, or just more constructive suggestions for improvement, please let JBoss know so the issue can be addressed.
Don't reinvent the wheel. Use the Spring Framework. It already provides this functionality and much more.
-1. Spring does not provide a JTA implementation, just a wrapper for various third-party ones. This is a common misunderstanding.
JTA supports local transactions and global transactions.
Another misconception, I'm afraid. The JTA spec deals only with XA, i.e. global transactions. Various well-known techniques exist for making a JTA transaction manager drive local transactions. These usually involve wrapping the Connection in an XAResource. Whilst most implementations support this, it's actually outside the scope of the spec, so you must check with the vendor before choosing a JTA implementation if you need this behaviour.
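To show the shape of that wrapping technique (and only the shape: a real implementation has to deal with Xid bookkeeping, recovery, and the fact that a local resource cannot genuinely vote in prepare), here is a hedged skeleton:

    import java.sql.Connection;
    import javax.transaction.xa.XAException;
    import javax.transaction.xa.XAResource;
    import javax.transaction.xa.Xid;

    // Bridges a plain JDBC Connection into a JTA transaction ("last resource" style).
    public class LocalConnectionXAResource implements XAResource {

        private final Connection connection;

        public LocalConnectionXAResource(Connection connection) {
            this.connection = connection;
        }

        public void commit(Xid xid, boolean onePhase) throws XAException {
            try { connection.commit(); }      // local commit stands in for XA commit
            catch (Exception e) { throw new XAException(XAException.XAER_RMERR); }
        }

        public void rollback(Xid xid) throws XAException {
            try { connection.rollback(); }
            catch (Exception e) { throw new XAException(XAException.XAER_RMERR); }
        }

        public int prepare(Xid xid) { return XA_OK; } // cannot truly vote -- the caveat above

        public void start(Xid xid, int flags) {}
        public void end(Xid xid, int flags) {}
        public void forget(Xid xid) {}
        public Xid[] recover(int flag) { return new Xid[0]; }
        public boolean isSameRM(XAResource other) { return other == this; }
        public int getTransactionTimeout() { return 0; }
        public boolean setTransactionTimeout(int seconds) { return false; }
    }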
Try Atomikos TransactionsEssentials.
Unlike competing open-source JTA/XA implementations, this one was written from the start for Java SE. Consequently, it offers premium JDBC and JMS pools as well as JTA/XA functionality, and you will find it very easy to integrate into your applications.
Best, Guy
Don't reinvent the wheel. Use the Spring Framework. It already provides this functionality and much more.
You can use Spring, much as I'm not that keen on it.
An example of what you might want is here
JTA supports local transactions and global transactions.
Local transactions can be easily handled by Spring, JPA or even manual commits on connections.
Global transactions REQUIRE a transaction coordinator. That's a separate product/library which is not readily available in open source (or at least I'm not aware of it).
So, if I go by the title of your post ("JTA"), the answer is NO SIMPLE WAY.
If I read your posting itself ("provide access to JDBC resources and transactions"), I'd say Spring, JPA and Hibernate would all cover your needs (as I understood them).
P.S. Correction: JTA doesn't really support local transactions (as people have pointed out), but a case where you only need a single connection is essentially a local transaction, even if controlled by JTA, especially when the transaction manager is located in the same JVM (as often happens).
"JBoss's TM is horrible. At least, if you are hoping for ACID transactions. The best that one can say about it is that it will probably not screw up as long as it doesn't have to contend with any failures. And it's not alone... most transaction managers (even some commercial ones) really don't work."
Not sure what homework you did to arrive at the above statement, but JBossTS (the TM in JBoss since 2006, when it was acquired) does provide full ACID semantics. It was also originally part of the HP NetAction suite, where it was deployed in more mission-critical applications than any of the other open-source TMs.
I've elected to use the Bitronix Transaction Manager to solve this problem, although apparently there's at least one other option that wasn't apparent to me at the time (Atomikos).
Solving this ended up requiring me to use the Tomcat in-process JNDI provider as well, in order to associate the transaction with a JNDI name. Due to a limitation of that provider, I could not use the default name for a JTA UserTransaction, which isn't immediately apparent from the documentation.
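In case it helps anyone hitting the same wall, the resulting application code is just a JNDI lookup; the name below is a placeholder for whatever non-default name the provider forces you to bind, not the actual name I used:

    import javax.naming.InitialContext;
    import javax.transaction.UserTransaction;

    public class TxLookup {
        public static void main(String[] args) throws Exception {
            InitialContext ctx = new InitialContext();

            // The lookup name must match whatever the in-process JNDI provider
            // allowed the transaction manager to be bound under.
            UserTransaction utx =
                    (UserTransaction) ctx.lookup("java:comp/MyUserTransaction"); // placeholder

            utx.begin();
            // ... transactional work ...
            utx.commit();
        }
    }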
Thanks to all for the helpful answers anyhow!