I'm currently working on a project where we are migrating a web application that uses the ZK framework to a Eucalyptus cloud environment, but we wonder how we can make the framework scale. Is it even scalable?
Thanks in advance.
Sure, ZK can scale well. One of ZK's clients is building an application targeting 20 million users; the application passed its stress test a couple of months ago.
Like JSF and other server-side solutions, ZK has to hold the UI state on the server (unless you take the pure-client approach). That means you have to make the state serializable if you'd like to support failover. See http://books.zkoss.org/wiki/ZK_Developer%27s_Reference/Clustering for more information.
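To make that concrete, here is a minimal sketch of the idea (the class and its fields are invented for illustration, not part of ZK's API): anything reachable from the session, such as state held by a composer, must be serializable, and non-serializable resources must be restored after failover.

```java
import java.io.Serializable;

// Hypothetical UI-related state kept server-side in a ZK application.
public class CartState implements Serializable {
    private static final long serialVersionUID = 1L;

    private int itemCount;

    // Non-serializable resources (DB connections, etc.) must be marked
    // transient and re-acquired after deserialization on the failover node.
    private transient Object dbConnection;

    public int getItemCount() { return itemCount; }
    public void setItemCount(int itemCount) { this.itemCount = itemCount; }
}
```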
On the other hand, access to the UI state of one browser window won't block access from another browser window; access is fully parallel. In our consulting experience, the bottleneck is usually the backend services rather than the UI. In any case, depending on your target scale and the application's complexity, there are several architectural approaches, such as putting a load-balancing dispatcher in front of the UI layer, running the UI layer on a separate server, and so on.
I am not familiar with Eucalyptus, so I'm not sure whether there is anything specific to watch out for there.
My startup is building an online/mobile labor marketplace with a business interface where businesses post jobs, and a mobile interface through which we distribute those jobs to users. We use Rails, REST, Amazon RDS & EC2, and MySQL.
My question is: from a server-side architecture standpoint, which makes more sense?
a) Two applications, one serving the web interface and one acting as the server-side API for the mobile interface, communicating via the DB and running on two different EC2 instances
b) One comprehensive application serving both interfaces
Any opinion and perspective on pros and cons would be much appreciated
Thanks
Amazon Web Services gives you a set of tools that make your architecture simpler, more robust, and more scalable, provided you break your system down into smaller, simpler, independent components.
Since instances come in a range of sizes from micro to extra large (and several flavors of extra large), you have the flexibility to match each service to the appropriate size, configuration, software dependencies, update cycle, and so on. This makes life much easier for developers, testers, and administrators.
You also have the ability to scale, and even auto-scale, each layer and service independently. If one of the interfaces gains more users than the other, or the volume of data fed through one of them grows, you can scale only the relevant service. That saves you the complexity and cost of scaling the whole system at once.
Another characteristic of AWS is the ability to scale up, down, out, and in based on your needs. For example, if one of the interfaces has more users on working weekdays and fewer on weekends, you can scale that interface down over the weekend or at night, saving around 50% of the compute costs for those instances. Along the same lines, for the more static interfaces you can switch from the on-demand model to reserved instances, again saving about 50% of your costs.
To allow communication between the different interfaces you can use your DB, but you also have a few more options depending on your use case. One option is a queueing system such as SQS. The queue acts as a buffer between the interfaces, reducing the risk that a failure in one component (software bug, hardware failure...) affects the whole system. Another option for inter-interface communication, more tuned for performance, is an in-memory cache; AWS offers ElastiCache as such a service. It can be more efficient than pushing short-lived, high-volume transient data through the database.
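For example, a rough sketch of the queue option with the AWS SDK for Java (the queue URL and message body are placeholders; error handling and polling configuration omitted):

```java
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;

public class JobQueueExample {
    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/new-jobs"; // placeholder

        // Web interface: publish a new job posting to the queue instead of
        // writing directly into the mobile side's tables.
        sqs.sendMessage(queueUrl, "{\"jobId\": 42}");

        // Mobile API service: consume jobs at its own pace. If it is down,
        // messages simply wait in the queue instead of being lost.
        for (Message m : sqs.receiveMessage(queueUrl).getMessages()) {
            System.out.println("Received: " + m.getBody());
            sqs.deleteMessage(queueUrl, m.getReceiptHandle());
        }
    }
}
```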
I see no long-term cons, beyond giving up the speed of a quick (and dirty) implementation of a single service on a single machine.
The overall goal should be to share as much code as possible between the mobile and web applications. Otherwise you are looking at maintenance headaches, where you end up fixing bugs or adding features in at least two places.
Ideally everything below the front end should be common. The web UI should itself call the same server-side APIs you are going to need for the mobile application, and as much logic as possible should go into those APIs, leaving only presentation details to the UI. This is what patterns such as MVC prescribe.
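A minimal sketch of that layering (names invented, shown in Java purely for illustration; your stack is Rails, but the shape is the same):

```java
// Shared business logic lives in one service used by every front end.
class JobService {
    String findJobTitle(long jobId) {
        // The single place where queries, validation, and rules live.
        return "Plumber needed";
    }
}

// Web controller: renders HTML, delegates all logic to the service.
class WebJobController {
    private final JobService jobs = new JobService();

    String showJobPage(long jobId) {
        return "<h1>" + jobs.findJobTitle(jobId) + "</h1>";
    }
}

// Mobile API controller: returns JSON, delegates to the very same service.
class ApiJobController {
    private final JobService jobs = new JobService();

    String getJob(long jobId) {
        return "{\"title\": \"" + jobs.findJobTitle(jobId) + "\"}";
    }
}
```

Fix a bug in JobService and both front ends pick it up at once.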
I run a mobile site and a desktop site myself. The codebase is exactly the same at the PHP level; only the Smarty templates differ between the two. Granted, a mobile application is different from a mobile site, but the basic principles still apply.
I am beginning work on an individual project to bring an existing product out of the dark ages of classic ASP and into the light. My biggest decision to make before embarking on this lengthy journey is determining what frameworks and methodology I will implement for the new design.
Right now I am looking at MVC or MVVM (from what I gather the latter is just Silverlight?) for the web interface, Entity Framework or something I write myself as the model, and MSSQL as the data store.
Unfortunately I am just a fledgling programmer and I am not particularly aware of trends in the world of programming in general. I don't know what is just a passing fad and what technologies actually have lasting potential. I would really like to use something that is likely to remain relevant for some time. So I am looking to the professionals here for input on ideas that worked for you, pitfalls to watch out for and things to keep an eye on.
I appreciate any and all suggestions, keeping in mind that using the Microsoft/.NET stack is something of a prerequisite. I really want to make sure I am headed in the right direction before I start, as this will probably take several months.
As for frameworks, I personally suggest:
ASP.NET MVC 3 or MVC 4, depending on whether beta software is allowed.
Entity Framework 4.3 or 5.0. 5.0 is a lot faster (it has auto-compiled queries), but it's still a Release Candidate.
AutoMapper to map between Entities and ViewModels.
Ninject for dependency injection (useful if you want to write unit tests).
jQuery for things like client-side validation (it integrates perfectly with ASP.NET MVC).
Possibly a CSS framework such as Bootstrap.
Maybe RestSharp so you can easily perform HTTP requests.
In case it's a cloud service (most SaaS are) and you'd like to host it on Azure (brilliant integration with the .NET stack) you'll need the Azure SDK.
As for software architecture:
Use service layers
Use the repository pattern
Pass ViewModels to your views instead of entities
Set up a dependency injection container
That's my advice, I personally find this a golden combination for building enterprise applications (while not wasting too much time configuring lots of things).
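To show how those pieces fit together, here is a stripped-down sketch (in Java rather than C#, purely for illustration; every name is invented):

```java
// Entity: maps to the database (via an ORM such as Entity Framework in .NET).
class Customer {
    long id;
    String name;
    String passwordHash; // never expose this to a view
}

// Repository: the only layer that knows about data access.
interface CustomerRepository {
    Customer findById(long id);
}

// Service layer: business rules built on the repository abstraction.
class CustomerService {
    private final CustomerRepository repo;

    // Constructor injection: a DI container (Ninject, in the advice above)
    // supplies the concrete repository, which also makes unit testing easy.
    CustomerService(CustomerRepository repo) { this.repo = repo; }

    CustomerViewModel getProfile(long id) {
        Customer c = repo.findById(id);
        // Map entity -> ViewModel (AutoMapper automates this in .NET), so the
        // view only ever sees what it is allowed to display.
        return new CustomerViewModel(c.name);
    }
}

// ViewModel: exactly the data the view needs, nothing more.
class CustomerViewModel {
    final String displayName;
    CustomerViewModel(String displayName) { this.displayName = displayName; }
}
```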
Pitfalls:
I don't know if unit testing is really necessary from day one. You should definitely keep it in mind while setting up the architecture, but I personally choose to do it later, because I don't even know yet whether my product will succeed, so my time is better spent building a Minimum Viable Product fast.
Don't assume anything. You can waste months of your precious time working on a cool feature that you think everyone will like, but often that's not the case. Build just the absolute required minimum, and improve it later if your users like it.
I will add to #Leon's suggestions, as those are great from an application-framework perspective; I want to add a cloud-methodology perspective here.
As you have chosen SaaS, you are clearly moving completely to the cloud, bringing your application and data along together; that's great!
There are several layers to any cloud application; to understand them, let's look at what a cloud service stack looks like, taking Windows Azure as an example:
You have Compute, where your application runs, with or without a web server.
You have Azure Table storage, which you can use to store key-value pairs in rows and access them very fast.
You have Azure Queues, which allow decoupling of the different parts of a cloud application, so it can be built with different technologies and scale easily with traffic.
You have Access Control Service to authenticate users through OpenID or AD.
You have Service Bus to connect to other services, in the cloud or on-premises at third parties.
You have Azure Blob storage to use as a web-based flat-file server.
You have Azure Cache (an in-memory cache built to scale in the cloud).
You have SQL Azure as your cloud database.
There are many more services you can explore and use.
So when you decide to move your application from traditional web hosting to the cloud, you really have to look at how to take advantage of these different cloud services to scale your application when needed and save a lot of money.
With your application in the cloud, try something like the following:
Keep your application logic as small as possible
Keep your static content outside the compute instances (e.g., in Blob storage)
Use a cloud-based cache for fast access as the application scales out
Move data out of traditional RDBMS databases into a NoSQL store (key-value, document, etc.) to save money and gain flexibility, where possible and applicable
Take advantage of other available services to reduce application complexity
If you keep the above aspects in mind, you will create a true cloud-based application that is fast and saves you money.
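To make the caching advice concrete, here is a minimal cache-aside sketch (Java; the in-memory map is a stand-in for a real distributed cache client such as Azure Cache, and all names are invented):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ProductLookup {
    // Stand-in for a distributed cache client (e.g., Azure Cache).
    private final Map<String, String> cache = new ConcurrentHashMap<String, String>();

    public String getProductName(String productId) {
        // 1. Try the cache first: cheap, and it scales out with the app.
        String cached = cache.get(productId);
        if (cached != null) {
            return cached;
        }
        // 2. On a miss, hit the slower (and costlier) database...
        String fromDb = loadFromDatabase(productId);
        // 3. ...and populate the cache for subsequent requests.
        cache.put(productId, fromDb);
        return fromDb;
    }

    private String loadFromDatabase(String productId) {
        return "product-" + productId; // placeholder for a real query
    }
}
```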
I want to know the difference between these two deployment modes of Neo4j (embedded vs. standalone server). Of course the names are fairly self-explanatory, but what are the main differences?
What factors should be considered in deciding which technique to use in the project?
Pros and cons.
P.S. Sorry if this is a repeat question, but I searched and was not able to find any question that answers mine.
Because the standalone server is built on top of the embedded server, the general rule of thumb is that the embedded server is more capable and (obviously) has lower latency. Either can operate in high-availability mode, allow monitoring, and even accept connections from the neo4j-shell. With the server, though, you get more functionality out of the box: remoting, basic visualization, a monitoring interface, etc.
The differences are otherwise the practical ones you'd imagine. Choosing a deployment approach is influenced by two things:
Language - embedded mode requires that you implement your application in a JVM-compatible language. The server supports any language/framework that can send HTTP requests.
Hardware - sharing physical resources between your application and Neo4j can be demanding. Scaling may argue for a dedicated machine to split out the persistence layer. The server obviously has a remote API to support segmenting your application.
It's otherwise difficult to give guidance without a specific usage scenario. Deploying into an existing service-oriented architecture? Probably the server. Running on a copier machine? Go embedded. A from-scratch web application? What's the rest of your stack?
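For a feel of the difference, here is a minimal sketch of each mode (the path and URL are placeholders; the embedded calls follow the classic 1.x/2.x-era Java API, so adjust for your Neo4j version):

```java
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

public class EmbeddedVsServer {
    public static void main(String[] args) {
        // Embedded mode: Neo4j runs inside your JVM process; calls are
        // in-process method calls, hence the lower latency.
        GraphDatabaseService db =
                new GraphDatabaseFactory().newEmbeddedDatabase("/var/data/neo4j"); // placeholder path
        Transaction tx = db.beginTx();
        try {
            Node user = db.createNode();
            user.setProperty("name", "alice");
            tx.success();
        } finally {
            tx.finish();
        }
        db.shutdown();

        // Server mode: the database is a separate process; any language that
        // speaks HTTP can use it, e.g. via the REST Cypher endpoint:
        //   POST http://localhost:7474/db/data/cypher
        //   {"query": "CREATE (n {name: {name}})", "params": {"name": "alice"}}
    }
}
```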
How can I write a cloud-aware application, i.e., one that takes advantage of being deployed in the cloud? Is it the same as an application that runs on a VPS/dedicated server? If not, what are the differences? Are there any design changes? What steps do I need to take to migrate an application to being cloud-aware?
Also, I am about to implement a web application idea that needs security, performance, caching, and, more importantly, has to be free. I have been comparing frameworks and found that Django has the lowest RAM/CPU usage and works great in prefork+threaded mode, but I have also read that Django-based sites stop responding under a huge load of connections. Other frameworks I have seen/know are Zend, CakePHP, Lithium/Cake3, CodeIgniter, Symfony, Ruby on Rails....
So I would leave this to your opinion as well: please suggest a good free framework based on my needs.
Finally thanks for reading the essay ;)
I feel a matrix moment coming on... "what is the cloud? The cloud is all around us, a prison for your program..." (what? the FAQ said bring your sense of humour...)
OK, so seriously, what is the cloud? It depends on the implementation, but the usual features include scalable computing resources charged per CPU-hour, storage, etc. So yes, it is a bit like developing on your VPS/a normal server.
As I understand it, Google App Engine allows you to consume as much as you want. The back-end resource management is done by Google, and you pay for what you use. I believe there's even a free threshold.
Amazon EC2 exposes an API that allows you to add virtual machine instances (someone please correct me if I'm wrong) having pre-configured them, deploy another instance of your web app, and talk between private IP ranges if you wish (Slicehost definitely allows this). As such, EC2 can let you run a giant load balancer on the front end passing work off to any number of VMs on the back end, or expose all of that publicly; take your pick. I'm not sure of the exact details because I didn't build the system, but that's how I understand it.
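For instance, the "add another VM" call looks roughly like this with the AWS SDK for Java (the AMI ID and instance type are placeholders):

```java
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.Instance;
import com.amazonaws.services.ec2.model.RunInstancesRequest;
import com.amazonaws.services.ec2.model.RunInstancesResult;

public class LaunchInstance {
    public static void main(String[] args) {
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

        // Ask EC2 to start one more pre-configured VM from a machine image.
        RunInstancesRequest request = new RunInstancesRequest()
                .withImageId("ami-12345678")   // placeholder image with your app baked in
                .withInstanceType("t2.micro")  // placeholder size
                .withMinCount(1)
                .withMaxCount(1);

        RunInstancesResult result = ec2.runInstances(request);
        for (Instance i : result.getReservation().getInstances()) {
            System.out.println("Launched instance: " + i.getInstanceId());
        }
    }
}
```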
I have a feeling (but I know least about Azure) that on Azure, resource management is done automatically, for you, by Microsoft, based on what your app uses.
So, in summary, the cloud is different things depending on which particular cloud you choose. EC2 seems to expose an API for managing resources, while GAE and Azure appear to be environments that grow and shrink in the background based on your usage.
Note: I am aware there are certain constraints when developing on GAE, particularly with Java. In a minute I'll edit in a link to another thread where someone made an excellent comment to this effect on one of my posts.
Edit as promised, see this thread: Cloud Agnostic Architecture?
As for a choice of framework, it really doesn't matter as far as I'm concerned. If you are planning on deploying to one of these platforms you might want to check framework/language availability. I personally have just started with Django and love it, having learnt Python a while ago, so, in my totally unbiased opinion, use Django. Other developers will probably recommend other things based on their preferences. What do you know? What are you most comfortable with? What do you like the most? I'd go with that. I chose Django purely because I'm not a big fan of PHP, I like Python, and I was comfortable with the framework when I initially played around with it.
Edit: So how do you write cloud-aware code? You design your software in such a way that it fits one of these architectures. Again, see the cloud-agnostic thread for some really good discussion of ways to do this. For example, you might talk to some services on GAE that scale. That they happen to be on GAE doesn't really matter; you use loose-coupling ideas. In essence, this is just a step up from the web-service idea.
Also, another feature of the cloud I forgot to mention is the idea of CDNs being provided for you - some cloud implementations might move your data around the globe to make it more efficient to serve, or just because that's where they have space. If that's an issue, don't use the cloud.
I cannot answer your question - I'm not experienced in such projects - but I can tell you one thing... both CakePHP and CodeIgniter were designed for PHP4 - in other words, for really old technology - and it seems nothing is going to change in their case. Symfony (especially the 2.0 version, which is still in heavy beta) is worth considering, but as I said at the very beginning, I cannot back this up with my own experience.
When designing applications for deployment in the cloud, the main thing to consider is recoverability. If your server is terminated, you may lose all of your data. If you're deploying on Amazon, I'd recommend putting all data that needs to persist onto an Elastic Block Storage (EBS) volume: user-generated content/files, the database files, and logs. I also snapshot the EBS volume on a 5-day rotation, so the volume itself is backed up. That said, I've had a cloud server up on AWS for over a year without any issues.
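A rough sketch of that snapshot step with the AWS SDK for Java (the volume ID is a placeholder; in practice this would run from a scheduled job that also prunes snapshots older than the rotation window):

```java
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.CreateSnapshotRequest;
import com.amazonaws.services.ec2.model.CreateSnapshotResult;

public class BackupVolume {
    public static void main(String[] args) {
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

        // Snapshot the EBS volume holding the database files, uploads, and logs.
        CreateSnapshotRequest request = new CreateSnapshotRequest()
                .withVolumeId("vol-12345678") // placeholder volume ID
                .withDescription("nightly backup");

        CreateSnapshotResult result = ec2.createSnapshot(request);
        System.out.println("Snapshot started: " + result.getSnapshot().getSnapshotId());
    }
}
```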
As for frameworks, I'm giving Grails a try at the moment and I'm quite enjoying it. It's built to be syntactically similar to Rails but runs on the JVM, which means you can take advantage of all the Java goodness - threading, concurrency, and all the great libraries out there - to build your web application.
I need to build a web application with different process flows and different UI steps depending on the locale of the logged-in user.
I have developed a number of ASP.NET applications in C# and like the separation of concerns an MVC approach would give me. So I am looking at using these technologies.
The kicker is that different users in different locales need very different experiences, despite accessing the same data source. I'm also constrained by the requirement that new process flows can be configured easily. The XAML-based Windows Workflow Foundation looks like a good candidate and would let me avoid developing my own process-flow engine.
However, I am a little concerned about the performance implications of such an approach. Has anyone tried this sort of architecture? What sort of impact can I expect on request time, CPU utilization and memory consumption?
All opinions gratefully received. Thanks.
Howdo
The principal objection, and I suppose performance risk, though not specifically related to your question, is that Windows Workflow wasn't designed for that type of scenario. Simply put, WF's first design tenet was to enable long-running transactions built around web services and SOA, i.e. transactional connectivity lasting days, and its runtime is optimised for that. It isn't really a good fit, especially as you would need to shoehorn it into working, and it will be much more resource-hungry for short-running workflows (state changes, caching, etc.) than comparable process-flow engines. If you do go ahead, get a pilot up and running and test it to death using Mercury LoadRunner.
scope_creep