I did a web search on how to scale a CherryPy server and didn't find much information, so I was wondering if there are any guidelines on the subject. We are planning to run two CherryPy instances for a consumer-facing application. Backend caching and static file caching are already handled; we just need to handle a large number of simple GET requests.
How do we scale the front end?
By default, CherryPy's server.thread_pool is 10. When I increase it to 50 or 100 and run my load test against it, the server seems to freeze. Most resources I found use a number between 30 and 50.
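For context, the setting goes in CherryPy's global config; here is a minimal sketch of what I'm tuning (the values are illustrative starting points, not recommendations):

```python
# Global CherryPy config sketch - illustrative values only, not tested
# recommendations; raise thread_pool gradually while watching latency.
conf = {
    "server.thread_pool": 30,         # worker threads handling requests
    "server.socket_queue_size": 128,  # backlog for connections awaiting a thread
    "server.socket_timeout": 10,      # seconds before dropping a slow client
}
# Applied before starting the engine:
# cherrypy.config.update(conf)
# cherrypy.quickstart(App())
```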
Are there other techniques for scaling to thousands of simultaneous users?
Thanks!
Here are two sites you should take a look at:
http://yougov.github.com/pycon/slides/
http://www.readmespot.com/question/f/100994/cherrypy--load-balancing--and-remote-failover
I am building a website with Rails on AWS and I am trying to determine the best way to stress-test it while also getting a rough idea of the per-user cost I will be paying. I have looked at tools like Selenium, and I am curious whether I could do something similar with Postman.
My objectives are:
Observe what kind of load the server would be under during the test and how CPU and memory are affected.
See how the generated load affects CPU usage on the system, which is what generates cost for me on AWS.
Through Postman I can easily generate REST calls to my Rails server and simulate user interaction. If I created some kind of multithreaded application that made many such calls to the server, would that be an effective way to measure these objectives?
If not, is there a tool that would help me with either (or both) of these objectives?
Thanks,
You can use BlazeMeter to do the load test.
This AWS blog post shows you how you can do it.
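The multithreaded client the question describes can also be sketched in a few lines of Python with just the standard library; the URL and counts below are placeholders, and a dedicated tool (BlazeMeter, JMeter, ab) will give much richer statistics:

```python
# Minimal multithreaded load generator: fire many GET requests concurrently
# and record per-request latency. A sketch only - real load-testing tools
# handle ramp-up, think time, and reporting far better.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def timed_get(url):
    """GET the URL once; return (status_code, elapsed_seconds)."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
        status = resp.status
    return status, time.perf_counter() - start

def hammer(url, total=200, workers=20):
    """Issue `total` GETs using `workers` threads; return statuses and sorted latencies."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(timed_get, [url] * total))
    statuses = [s for s, _ in results]
    latencies = sorted(t for _, t in results)
    return statuses, latencies

# Usage (hypothetical local Rails dev server):
# statuses, lats = hammer("http://localhost:3000/")
# print(f"ok={statuses.count(200)} p95={lats[int(len(lats) * 0.95)]:.3f}s")
```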
I'm aware of the hugely trafficked sites built in Django or Ruby on Rails. I'm considering one of these frameworks for an application that will be deployed on ONE box and used internally by a company. I'm a noob, and I'm wondering how many concurrent users I can support with a response time of under 2 seconds.
Example box spec: Core i5 2.3 GHz, 8 GB RAM. Apache web server. Postgres DB.
App overview: Simple CRUD operations. Small models of 10-20 fields probably <1K data per record. Relational database schema, around 20 tables.
Example usage scenario: 100 users making a CRUD request every 5 seconds (=20 requests per second). At the same time 2 users uploading a video and one background search process running to identify potentially related data entries.
1) Would a video upload process run outside the GIL once an initial request set it off?
2) For the above system built in Django with the full stack deployed on the box described above, with the usage figures above, should I expect response times <2s? If so what would you estimate my maximum hits per second could be for response time <2s?
3) For the same scenario with Ruby on Rails, could I expect response times of <2s?
4) Specifically with regards to response times at the above usage levels, would it be significantly better built in Java (Play framework) due to the JVM's support for concurrent processing?
Many thanks
Duncan
I've been running a Rails app on one big dedicated server. Now, for scaling, I want to switch to a cloud hosting provider and serve the app on 3 instances - App, DB, and Redis.
I've had a really bad experience with Heroku performance-wise, and hence with cost efficiency. So for me two alternatives remain: Engine Yard and Enterprise-Rails.
An important point is that Engine Yard doesn't offer an autoscaling option to handle peaks. On the other hand, Enterprise-Rails doesn't have much documentation; most of the setup is handled by a support crew.
What are the other differences, and which should I use for my website? I don't need much administration work and I am not experienced with it. Basically, I just want my site to run safely, stably, and cost-efficiently without much personal work involved.
I am running a massive Rails app off AWS at this time and I'm really happy with it. Previously I had a number of dedicated boxes that were always causing problems - sooner or later one of them would crash for some reason: RAID failures, database problems, whatnot.
At AWS I use RDS for the database and ElastiCache for caching. I keep all my code on a fat instance that acts as the staging server, and a variable number of reserved instances load the code from it via NFS.
I also use autoscaling: we've prepaid for a number of reserved instances, and autoscaling starts up nodes when CPU usage goes above 60%, then removes them when it drops below 25%. Autoscaling rules are based on CloudWatch alerts that can be set to monitor a particular group of instances, memcache servers, and so on. You even get e-mail and SMS notifications via SNS when certain scaling activities take place, say when more than 100 instances are spawned in less than an hour (a massive traffic spike). The instances also get added right up to the load balancers, by the way, and you don't need to mess with the session store, as you can use the sticky-session feature, which is quite nice.
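The scale-out half of such a policy (add an instance when average CPU stays above 60%) can be sketched with boto3. The group and policy names below are hypothetical, and the actual client calls are left commented out since they require AWS credentials:

```python
# Sketch of a CloudWatch-driven scale-out rule, mirroring the 60% CPU trigger
# described above. Names ("web-asg", "scale-out") are hypothetical examples.
scale_out_policy = dict(
    AutoScalingGroupName="web-asg",   # hypothetical autoscaling group
    PolicyName="scale-out",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,              # add one instance per trigger
)
high_cpu_alarm = dict(
    AlarmName="web-asg-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Statistic="Average",
    Period=300,                       # 5-minute samples
    EvaluationPeriods=2,              # sustained, not a momentary spike
    Threshold=60.0,                   # the 60% trigger from the text
    ComparisonOperator="GreaterThanThreshold",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
)
# With credentials configured, this wires the alarm to the policy:
# import boto3
# arn = boto3.client("autoscaling").put_scaling_policy(**scale_out_policy)["PolicyARN"]
# boto3.client("cloudwatch").put_metric_alarm(AlarmActions=[arn], **high_cpu_alarm)
```

A mirror-image policy (ScalingAdjustment=-1, LessThanThreshold at 25%) handles the scale-in side.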
Recently I also started using a second launch group with spot instances. This complicated things a bit in terms of CloudWatch rules, but I'm able to save a lot every month, as spot prices are much lower. When the (minimum) spot price I bid is not enough, my setup switches back to reserved instances.
Even more recently I also started using CloudFront, which made my app's page assets load really fast (about 2 MB of CSS, JS, and icon sprites). Previously I was serving them directly from the instances via the load balancers.
This took about 20 hours to deploy, test and tune for maximum performance and availability.
One of the problems I have with AWS is that there's no support unless you're prepared to foot a bill. They claim some support is offered without a subscription, but the only option in the support area is Billing. Ha. Fortunately it's all stable enough not to put me in a position where I'd have to pay for it.
Overall, Rails fits in quite nicely with AWS. I spend less than 2 hours per month doing maintenance, where I was spending over 30 previously. Most important for me is knowing that I can GTFO on a vacation for X months and nothing will cause any trouble - I haven't had a monitoring alert in more than a year.
Later edit: the app is a sports site with a white-labeling feature, lots of users, and lots of administrators working on content in the back-end, and it's database-intensive, as we show market pricing data that has to update every few seconds. I had an average load time of about 3 seconds per page with dedicated servers that were doing about the same thing - database, memcache, storage, load balancing, web app. Now my average is under 1 second. My monthly bill is about 8 times lower now.
While Engine Yard doesn't offer auto-scaling (it is in the pipeline), we do have a fairly easy-to-use scaling feature that allows you to spin up multiple instances at once in times of need.
The advantages over something like Enterprise-Rails are full documentation, the choice to deploy from the CLI or the dashboard, and our amazing support team. It's also easier to use Engine Yard and move from a personal machine or from another cloud setup than it is to use a service such as AWS directly.
I am working on a personal project involving an iOS app which retrieves data (an integer value) daily from the server at my house. Right now I have a single server which can handle only about 100-120 concurrent requests per second. However, I want to scale my setup to handle as many as 70-80K concurrent requests while using a minimum of servers, as my budget is limited. Could anyone suggest some techniques for this?
I have a Rails 3.2 application hosted by Heroku. I am based in the UK and most of the application's users are based in Europe. Currently the performance leaves a lot to be desired, given that the app is located in the American region.
Is there anything I can do to improve performance for European users whilst still hosting with Heroku, or do I need to look for an alternative host based in the European region?
Edit: The problem is not the app. It's the delay between requesting a page and the request hitting the app.
As of yesterday (2013-04-24), the simple answer to this question is: heroku create --region eu
Read more on the blog: https://blog.heroku.com/archives/2013/4/24/europe-region
To migrate an existing app: https://devcenter.heroku.com/articles/app-migration#fork-application
If you're on the free stack then use this tip to prevent your dyno from idling and spinning down (and having to spin back up):
Easy way to prevent Heroku idling?
Find out which pages are having the most negative impact on performance and see if you can use any of these caching strategies to lighten the load:
https://devcenter.heroku.com/articles/caching-strategies
Identify processes that could be run in the background so you can return responses faster. Resque and Redis work well for that:
https://devcenter.heroku.com/articles/queuing-ruby-resque
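The pattern Resque implements - enqueue the slow work and return immediately while a background worker drains the queue - can be illustrated with a small stdlib Python sketch (Resque itself does this across processes via Redis, not in-process threads):

```python
# Background-job pattern in miniature: the request handler enqueues work and
# returns at once; a worker thread runs the slow task off the request path.
import queue
import threading

jobs = queue.Queue()
done = []  # completed results, for demonstration

def worker():
    while True:
        job = jobs.get()
        if job is None:          # sentinel: shut the worker down
            break
        done.append(job())       # run the slow work in the background
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

def handle_request():
    """Respond before the work finishes, like queuing a Resque job."""
    jobs.put(lambda: sum(range(1000)))  # stand-in for a slow task
    return "202 Accepted"
```

The point is the shape, not the implementation: the request path only pays the cost of a queue push.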
Have you looked at:
reduce page size / number of objects - it doesn't matter how fast you make the site if the pages are huge
add a CDN - moving static content closer to the user dramatically improves performance for those assets (how much it impacts your site depends on how much static data you have)
various other performance tweaks from yslow or similar
anycast DNS - a decent DNS service can make a minor but noticeable difference
I spent some time optimising my site on Octopress/Heroku and wrote it up (http://www.damon.io/blog/2013/01/10/improving-octopress-jekyll-site-performance-DNS-hosting-CDN/); there are some comparisons of CDNs and DNS providers which you may find interesting.
Of course, how relevant this is to your site, I can't say, but hopefully there's something useful in here for you.