I am going to create an application using JSF 2.x, GlassFish 3.1 Open Source, and JPA + PostgreSQL. I want to develop it in such a way that my app can be clustered on several physical servers and load balanced.
What are the recommended free and open source technologies for clustering and load balancing a JSF 2.0 web application?
What are the best approaches and what should I keep in mind before planning and designing my application?
Any other useful information related to this question is also appreciated.
Thanks in advance.
GlassFish application server has built-in cluster support. You have to run your application on multiple GlassFish instances and configure the servers to replicate session data to one another (bind the servers into a cluster).
To enable replication for your application, put the following tag in web.xml:
<distributable />
When the cluster is set up properly, the HTTP sessions will be replicated among the cluster nodes. What's left is to configure a load balancer such as Apache httpd that will accept requests and route them to a specific server in the cluster.
In general, avoid storing data in the session as much as possible, and make every bean whose scope is longer-lived than request serializable.
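For example, here is a minimal sketch of a session-scoped JSF managed bean that is safe to replicate; the bean name and its contents are purely illustrative:

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

import javax.faces.bean.ManagedBean;
import javax.faces.bean.SessionScoped;

// Any bean with a scope longer-lived than request should implement Serializable
// so the container can replicate it to the other cluster nodes.
@ManagedBean
@SessionScoped
public class ShoppingCartBean implements Serializable {

    private static final long serialVersionUID = 1L;

    // Keep session state small; large object graphs make replication expensive.
    private List<String> itemIds = new ArrayList<String>();

    public void addItem(String itemId) {
        itemIds.add(itemId);
    }

    public List<String> getItemIds() {
        return itemIds;
    }
}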
Look in google for more information.
Related
I want to design an application using AWS as IaaS, Docker as PaaS, and Spring Boot and Spring Cloud as the application technology.
For this, I googled and read a lot of blogs and watched videos, but could not find an answer.
I developed one application using Spring Boot and Spring Cloud technology, and the application architecture looks like the image below.
This design looks good and works fine, as expected.
Now the new task is that I need to use the cloud (AWS) as the infrastructure, along with Docker.
For that, I designed one more architecture, and it looks like the image below.
The components are as follows:
ELB (Elastic Load Balancer) -> Target Group (part of Auto Scaling) -> EC2 instances (more will be created on demand)
Now, if I want to integrate my previous design, I think there is no need for a Zuul server here because the load balancing is done by the ELB, and I do not need the Service Discovery component either because that is handled by the target Auto Scaling group.
I am a little bit confused here with Spring Cloud and AWS infrastructure.
Could someone help me understand, in simple terms, how I can integrate these components to work together?
Thanks
Why Spring Cloud with AWS?
Let's take an example of when you need Spring Cloud even if your architecture is deployed on AWS infrastructure:
Imagine your Product service needs to communicate with your Order service; in this case you will see the utility of Spring Cloud.
You don't see the necessity of Spring Cloud because you don't have internal communication between your services, and that is what the registry service is for.
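For example, here is a minimal sketch assuming Eureka as the registry and Spring Cloud Feign on the classpath; the service id "order-service", the URL path, and the return type are illustrative, and on older Netflix-based Spring Cloud versions the Feign annotations live in org.springframework.cloud.netflix.feign instead:

import java.util.List;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.openfeign.EnableFeignClients;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

@SpringBootApplication
@EnableDiscoveryClient // register with / look up services in the registry (e.g. Eureka)
@EnableFeignClients
public class ProductServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(ProductServiceApplication.class, args);
    }
}

// "order-service" is the logical name registered in the service registry,
// so no host or port is hard-coded in the Product service.
@FeignClient(name = "order-service")
interface OrderClient {
    @RequestMapping(method = RequestMethod.GET, value = "/orders/by-product/{productId}")
    List<String> findOrderIdsForProduct(@PathVariable("productId") Long productId);
}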
Why a Gateway service (Zuul in your architecture)?
Again, your current architecture doesn't use (or need) the power of the Gateway pattern.
Let's assume your system needs to aggregate results from multiple services to respond to a client request. You can do this in the Gateway (Zuul in your case).
Another advantage of using a Gateway service is that you can use it as a unified front door to your system, which allows a browser, mobile app, or other user interface to consume services from multiple hosts without managing cross-origin resource sharing (CORS) and authentication for each one.
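A minimal sketch of such a gateway, assuming Spring Cloud Netflix Zuul is on the classpath; the route mappings mentioned in the comment would live in application.yml and are illustrative:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

@SpringBootApplication
@EnableZuulProxy // single entry point; routes such as /products/** and /orders/** are declared in application.yml
public class GatewayApplication {
    public static void main(String[] args) {
        SpringApplication.run(GatewayApplication.class, args);
    }
}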
Important:
It's fine not to use Spring Cloud; it is not a rule or THE right way to implement a microservices architecture. If you don't need it, don't use it.
I will develop and host an e-commerce website based on ASP.NET MVC4 (with several SQL Server jobs).
I am thinking of using Azure in order to stay in the Microsoft world and avoid dedicated server management.
The shared Web Site package with 1 site / 5 GB SQL Server database / 200 GB bandwidth is very attractive, with pricing based on 12 months.
But I don't know if this configuration is enough, especially the bandwidth.
What do you think? Have you used Azure with this type of application?
Regards,
Guillaume.
If you want to develop an e-commerce application, you will have to secure customers' sensitive data (credit cards, address details, etc.) via secure connections (HTTPS; in many countries this is a legal requirement). For that reason you will need SSL support.
Azure Web Sites do not support SSL for custom domains. However, they support SSL for the *.azurewebsites.net DNS name. So if your e-commerce application's DNS name will be, say, my-ecom-app.azurewebsites.net, then it's fine. Otherwise, I would not recommend the Azure Web Sites solution yet (I am sure SSL support for custom domains on Azure Web Sites will be implemented).
Azure Cloud Services, on the other hand, have full SSL support for custom domains.
One of the really good places to check Azure features and the development roadmap is ScottGu's blog.
Azure Web Sites do not support SSL, and I really don't know of any successful e-commerce site that does not run SSL for at least part of the site. If you really want to host your e-commerce site on Azure today, your only real choice is to run Virtual Machines for your web front-end servers and either use them for your DB as well or use SQL Azure.
We developed a platform called Virto Commerce that does just that: an MVC4 website hosted on Azure. There was also a need for SQL jobs (indexing, payment processing, cart cleanups, and so on), for which we used a WorkerRole (instead of a WebRole). A WorkerRole and a WebRole can actually be combined in a single deployment; however, it is better to use a separate instance for worker roles. In our case the WorkerRole acted as a scheduler for multiple jobs defined in the database.
The challenge with WorkerRoles, however, is making sure they scale well when new instances are added, so the workload needs to be distributed between multiple instances. This is done through the use of queues and blob locks: each job is split in two, one part that schedules and partitions the work, and a second part that picks up the next partition and completes it.
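To illustrate the split, here is a rough, language-agnostic sketch (written in Java to keep this thread's examples in one language; it is not Azure SDK code and not Virto Commerce's actual implementation). The queue and lock interfaces below stand in for an Azure storage queue and a blob lease:

import java.util.List;

// Stand-ins for the cloud primitives: in the Azure case these would be a
// storage queue and a blob lease; the interfaces are purely illustrative.
interface WorkQueue {
    void enqueue(String partitionId);
    String dequeueOrNull();
}

interface PartitionLock {
    boolean tryAcquire(String partitionId); // e.g. acquire a blob lease
    void release(String partitionId);
}

class IndexingJob {

    // Runs on one instance (or on a schedule): split the work and enqueue the pieces.
    static void schedule(WorkQueue queue, List<String> partitionIds) {
        for (String partitionId : partitionIds) {
            queue.enqueue(partitionId);
        }
    }

    // Runs on every worker instance: pick up the next partition and complete it.
    static void workLoop(WorkQueue queue, PartitionLock locks) {
        String partitionId;
        while ((partitionId = queue.dequeueOrNull()) != null) {
            if (!locks.tryAcquire(partitionId)) {
                continue; // another instance is already working on this partition
            }
            try {
                processPartition(partitionId);
            } finally {
                locks.release(partitionId);
            }
        }
    }

    static void processPartition(String partitionId) {
        // do the actual indexing / payment processing / cart cleanup for this slice
    }
}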
Hope this helps!
PS: Virto Commerce is now available as an open source project on CodePlex; go to http://virtocommerce.codeplex.com
We run a number of ASP.NET MVC sites - one site per client instance with about 50 on a server. Each site has its own configuration/database/etc.
Each site might be running on a slightly different version of our application depending on where they are in our maintenance schedule.
At the moment, for background processing we are using Quartz.NET, which runs in the app domain of the website. This mostly works well, but it obviously suffers from issues such as not running when the app domain shuts down, for example after prolonged inactivity.
What are our options for creating a more robust solution?
Windows services are being discussed, but I don't know how we can achieve the same multi-site, multi-version setup that we get within IIS.
I really want each IIS site to have its own background processing which always runs like a service but is isolated in the same way an IIS site is.
You could install a Windows service for each client. That seems like a maintenance headache, though.
You could also write a single service that understands all versions of your app, then processes each client one after the other.
You could also create SQL Server jobs to do this, and set up one job for each customer.
I assume that you have a separate database for each client, as you mentioned.
Create a custom config file with DB connection strings for each client.
e.g.
<?xml version="1.0" encoding="utf-8" ?>
<MyClients>
  <MyClient name="Client1">
    <Connection key="Client1" value="data source=YOUR_DB_SERVER;initial catalog=client1DB;persist security info=True;user id=xxxx;password=xxxx;MultipleActiveResultSets=True;App=EntityFramework" />
  </MyClient>
  <MyClient name="Client2" ...></MyClient>
</MyClients>
Write a Windows service which loads the above config file (client | connection string) into memory and iterates through each client's database config. I guess all clients have a similar job-queuing infrastructure, so you can execute the same logic against each database.
You can use a Mutex when starting the processing for a client so that two instances of the service for the same client won't run at the same time.
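The answer above is about a .NET Windows service, but as a rough sketch of the same pattern (in Java, to keep this thread's code in one language): load the per-client connection strings, loop over the clients, and take a per-client lock (a file lock standing in for the named Mutex) so two instances never process the same client at once. All names, paths, and connection strings here are placeholders:

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.util.LinkedHashMap;
import java.util.Map;

public class ClientJobRunner {

    public static void main(String[] args) throws IOException {
        // In the real service this map would be loaded from the config file above.
        Map<String, String> clients = new LinkedHashMap<String, String>();
        clients.put("Client1", "data source=...;initial catalog=client1DB;...");
        clients.put("Client2", "data source=...;initial catalog=client2DB;...");

        for (Map.Entry<String, String> client : clients.entrySet()) {
            runForClient(client.getKey(), client.getValue());
        }
    }

    // A file lock plays the role of the named Mutex: only one process at a time
    // can work on a given client. The lock directory is illustrative.
    static void runForClient(String clientName, String connectionString) throws IOException {
        RandomAccessFile lockFile = new RandomAccessFile("/tmp/" + clientName + ".lock", "rw");
        FileChannel channel = lockFile.getChannel();
        FileLock lock = channel.tryLock();
        if (lock == null) {
            lockFile.close();
            return; // another instance is already processing this client
        }
        try {
            // connect with connectionString and run the background jobs for this client
        } finally {
            lock.release();
            lockFile.close();
        }
    }
}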
I am looking for a solution that allows me to deploy multiple load-balanced Grails instances that share a cache (EhCache Server?) and sessions. Is this possible?
I can't find any documentation on this (connecting to a common EhCache server or using distributed EhCache, and sharing sessions, perhaps using EhCache too)...
I'm looking for something that works like multiple Rails instances with a common memcached, where sessions and caches are stored in memcached...
I was recently listening to a talk by Dave Klein, author of the book "Grails: A Quick Start Guide", and Mike Allen, Product Manager at Terracotta, about clustering and scaling Grails applications.
They introduced Terracotta as a great tool for solving exactly these issues and showed the steps needed to share sessions across multiple instances of a Grails application.
So if you want to go with Terracotta, the Grails Terracotta plugin might be very useful for you.
EDIT
On the EHCache website you will now find a short introduction to using EHCache with Grails.
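For reference, a minimal sketch of using Ehcache 2.x programmatically from a Grails service or plain Java, assuming an ehcache.xml on the classpath that defines a cache named "products" and, for a clustered setup, the Terracotta configuration; the cache name and keys are illustrative:

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class ProductCacheExample {
    public static void main(String[] args) {
        // Picks up ehcache.xml from the classpath; in a clustered setup that file
        // would also contain the Terracotta configuration elements.
        CacheManager manager = CacheManager.newInstance();
        Cache products = manager.getCache("products");

        products.put(new Element("sku-123", "cached product data"));
        Element hit = products.get("sku-123");
        System.out.println(hit != null ? hit.getObjectValue() : "miss");

        manager.shutdown();
    }
}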
I have two separate installs of WebSphere. (Actually, one is WebSphere Application Server V6.1 with the EJB 3.0 and Web Services feature packs, and the other is WebSphere ESB Server V6.2.) However, I know that ESB is really built on top of WAS, so it has all the configuration settings that a regular WAS server has.
In my ESB server, I am trying to expose a service written as EJB 3.0 that will be deployed to the WAS 6.1 server. My question is not how to get EJB 2.1 calls to call into an EJB 3.0; we've done that already. My question is how to call across physical VMs. The WebSphere Application Server is running in its own cell/node/server, separate from the ESB server. From what I've read in the IBM documentation, it is possible to set up a namespace binding on WAS that points to a remote EJB on another WAS instance. Thus you could use JNDI to look up a bean on one WAS instance that really resides on another WAS instance. The beauty of this method is that the location of the EJB you want is abstracted to the container level, and you don't have to drag around properties files with the IP addresses and ports needed to access the bean should it change servers, etc. You just make a standard JNDI lookup to a remote EJB and you get it.
Sounds like it can be done. See the following link, and especially follow the links on EJB and indirect namespace bindings:
http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/index.jsp?topic=/com.ibm.websphere.express.doc/info/exp/ae/tnam_view_bindings.html
But I've been hitting my head against this for a while. It makes sense, it looks like it can be done, and the indirect namespace binding looks the most promising, but I can't get it to work quite right. My ESB server keeps complaining about not finding comp/env/ejb in the context in which I am asking for it. Very puzzled by this one.
Just wondering if anybody has done this kind of thing before. Can you give me a concrete example of how you set this up in WAS? Any help is appreciated.
Well, I have since talked with IBM about how to do this and was surprised by their answer. They said that if you are talking EJB to EJB within the same server or server cluster, then use EJB RMI via IIOP; with JNDI this abstracts where the bean is actually running (in a clustered environment).
If you are going from one server (or server cluster) to a different server (or server cluster), regardless of whether or not the target and source are in the same cell, IBM recommended that you use messaging or web services. They felt that was a better method of abstraction between applications, keeping them from being "tied" to each other. They did say that you could get EJBs to talk RMI via CORBA, but said to do that ONLY if absolutely necessary. And of course, you would need to know the IP and port number to come in over CORBA (and that, times each cluster member, if in a clustered environment).
Again, this kind of surprised me, but it does make sense. Just thought I'd share these thoughts with the world, especially if you are working with WebSphere.
How to look up from Tomcat:
use the IBM JDK as the runtime for Tomcat
find the bootstrap port, and use iiop in the PROVIDER_URL
I was stuck with the same problem. After trying to include all the WebSphere and IBM ORB JARs, I found this article from IBM:
How to lookup an EJB and other Resources in WebSphere Application Server using a Oracle JDK client - http://www-01.ibm.com/support/docview.wss?uid=swg21382740
Basically, I used CNCtxFactory instead of WsnInitialContextFactory:
// previously, with the IBM client JARs:
// props.put(Context.INITIAL_CONTEXT_FACTORY, "com.ibm.websphere.naming.WsnInitialContextFactory");
Hashtable<String, String> env = new Hashtable<String, String>();
env.put("java.naming.factory.initial", "com.sun.jndi.cosnaming.CNCtxFactory");
env.put("java.naming.provider.url", iioppath); // e.g. "iiop://<host>:<bootstrap port>"