I run an Ubuntu 8.04 shared host (VMware) with Apache + Passenger (a.k.a. mod_rails), MySQL and acts_as_ferret (in server mode). It's too slow on the first requests. I do a lot of REST operations on it and have very few users.
Now I want to do a fresh installation...
Which setup (based on Ubuntu) do you recommend for a really snappy RoR server? (e.g. Nginx, Thin, Mongrels or other fancy stuff)
Passenger is slow on the first request because it shuts down idle Rails processes, so the first request after an idle period has to spawn a new Rails process. You need to either ping the app regularly to keep a process alive, or set the idle timeout to a high value.
Look in the Passenger documentation for RailsPoolIdleTime.
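If you'd rather keep a process warm than raise the timeout, a tiny keep-alive script run from cron every few minutes does the job. A minimal sketch in Ruby; the URL is a placeholder for your own app:

```ruby
#!/usr/bin/env ruby
# keep_alive.rb: hit the app so Passenger never idles out the last process.
# Run it from cron every few minutes. The URL below is a placeholder.
require 'net/http'
require 'uri'

uri = URI.parse('http://www.example.com/')
response = Net::HTTP.get_response(uri)
puts "#{Time.now} #{response.code}"
```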
Check the ec2onrails mailing list, where there has been a lot of discussion of the various thin/nginx/passenger/apache alternatives and permutations, plus some hard data posted based on some decent tests.
You'll also find a nice packaged RoR/Ubuntu stack in the shape of the ec2onrails image (google ec2onrails). It's for running on the Amazon EC2 cloud, but it's got a lot of nice stuff in there plus Capistrano tasks. Currently it's based on Apache, but the version in progress is looking at the alternatives. There's no reason you couldn't use the same build script for a non-EC2 server.
If your problem is simply the initial requests, try warming your server up before considering it live (e.g. by running a script to automatically exercise the basic operations).
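For example, a throwaway warm-up script along these lines, run right after a deploy; the host and paths are placeholders for whatever your app's basic operations are:

```ruby
#!/usr/bin/env ruby
# warm_up.rb: exercise the basic operations once so the first real visitor
# doesn't pay the startup cost. Host and paths below are placeholders.
require 'net/http'

host  = 'www.example.com'
paths = ['/', '/login', '/products']

Net::HTTP.start(host) do |http|
  paths.each do |path|
    response = http.get(path)
    puts "#{path} -> #{response.code}"
  end
end
```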
Oh, and I should add: are you sure the problem is your stack? More likely it is your code. It may be worth seeing where your bottlenecks are first, and what you can get out of caching, improved queries and indexing, and especially memcached, before tweaking anything else.
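As a concrete example of the caching angle, wrapping an expensive lookup in Rails.cache backed by memcached is often the cheapest win. A sketch only; the model, key and query are hypothetical:

```ruby
# config/environments/production.rb: point Rails.cache at memcached
# (assumes a memcached instance on localhost:11211).
config.cache_store = :mem_cache_store, 'localhost:11211'

# Anywhere in the app: cache an expensive query for 10 minutes.
# Product and the cache key are hypothetical.
products = Rails.cache.fetch('products/featured', :expires_in => 10.minutes) do
  Product.find(:all, :conditions => { :featured => true })
end
```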
Well, you could get a big speed boost by switching to Ubuntu 9.04 or even 8.10.
I personally use nginx + Passenger on my Ubuntu stack, and use Sphinx instead of Ferret as well.
I've been using Ruby Enterprise Edition and Passenger (for Apache, since I run Apache anyway for other things) for some time, but I'm wondering if there's a new trend about what to use on servers nowadays.
For example I've heard about Thin, Unicorn... I also know that 1.9.2 is faster than REE, but I wonder about RAM consumption. I'd rather have it consume less RAM even at the expense of some speed.
Thanks for all advice.
If you want minimal memory, you should try Thin.
It does not have a master process like Unicorn or Passenger, and thus uses less memory.
If you have a very small app that needs to run on a small VM, you can use one Thin worker + nginx. I ran several Rails 3.2 apps using Thin + nginx + Postgres on 256MB VMs without swapping.
Unicorn is faster, but it needs a master process. It's a good fit for Heroku: you can set it to 2 or 3 workers and stay within the 512MB limit.
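For reference, the worker count lives in Unicorn's Ruby config file. A minimal sketch of a config/unicorn.rb for a memory-constrained box; the numbers are just the ones mentioned above:

```ruby
# config/unicorn.rb
worker_processes 3        # 2-3 workers usually fit within 512MB
timeout 30                # kill workers stuck for longer than 30s
preload_app true          # load the app in the master, share memory via COW

before_fork do |server, worker|
  # drop the master's DB connection so each worker opens its own
  ActiveRecord::Base.connection.disconnect! if defined?(ActiveRecord::Base)
end

after_fork do |server, worker|
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
end
```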
If your app is very big and you have many long-running requests, I would check out JRuby and Trinidad/TorqueBox.
I converted a few apps from MRI + Sidekiq to JRuby + Trinidad + Trinidad_Scheduler. I get about 100-200 req/sec using a pool of 50 threads in a Trinidad server!
What I like about JRuby is that you can combine everything in one Rails server. On the same JVM you can put together the cache store with Ehcache, scheduling, background processing and real multithreading.
You don't need to run Redis, memcached, Resque or Sidekiq separately.
I'm not saying they aren't good (I love Sidekiq and Resque), but you can decrease your complexity by combining everything in one process and still have high concurrency.
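One detail worth adding if you go the threaded route on Rails 3.x (with JRuby/Trinidad or any threaded server): the app has to opt in to multithreading, otherwise Rack::Lock serializes requests. A minimal sketch:

```ruby
# config/environments/production.rb (Rails 3.x)
# Removes the per-request mutex so multiple threads can serve
# requests concurrently; also forces eager loading of the app.
config.threadsafe!
```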
A more advanced, enterprise-grade solution is TorqueBox; it has support for clustering and is super scalable. But I've had problems with my app crashing on TorqueBox, so I'm sticking with Trinidad for now.
The disadvantage of JRuby? Memory! A Trinidad server will use a minimum of 512MB, and up to 2-3GB of RAM.
Also, comparing single-threaded performance, a single request from a Rails app running Ruby 1.9.3 is about twice as fast as the same request on JRuby.
Another option is Puma; you can get full multithreading on MRI with it. I myself could not get it stable enough on my apps, though.
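If you do give Puma a try, the thread and worker counts live in a small Ruby config file. A sketch of a config/puma.rb; the numbers are illustrative, not recommendations:

```ruby
# config/puma.rb
threads 1, 16     # min and max threads per worker
workers 2         # optional forked workers on top of the threads (MRI)
preload_app!      # load the app before forking so workers share memory
```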
So it all depends on your requirements: memory usage, full threading and concurrency.
Apart from Passenger, have a look at Unicorn, Trinidad, Puma and TorqueBox. Those seem to be the top Rails servers right now.
There is a great book with an introduction to converting your Rails app to JRuby and deploying it using several methods, such as Trinidad.
http://pragprog.com/book/jkdepj/deploying-with-jruby
The TorqueBox documentation is amazingly good. It's very detailed and explains really well how to use all the TorqueBox features.
http://torquebox.org/documentation/
I hope that sharing my experience has helped.
Passenger is still extremely strong, especially since REE will naturally support 1.9 in the near future. The fact that your application can crash without affecting anything else on your machine is an amazing feature to have. Deploying code is extremely easy because the server will continue to accept connections, which means less frustration/stress for you.
However, in terms of comparisons:
Here is a great resource for checking out various comparisons (including memory consumption) of all the new servers.
It compares Thin, Unicorn, Passenger, TorqueBox, Glassfish, and Trinidad:
http://torquebox.org/news/2011/03/14/benchmarking-torquebox-round2/
Mike Lewis' link does a good job of comparing those different Ruby servers. My personal experience has been with nginx/REE/Passenger, and it's been good. I haven't tried the others, so I can't comment on them.
However, I can speak on RAM usage. Your biggest savings of RAM will come from using 32-bit servers. In my experience (3x 3GB app servers), 64-bit REE/Passenger processes took up to 2x as much RAM as their 32-bit counterparts. We saw a significant performance increase moving from 64-bit to 32-bit servers, everything else staying the same. Unless your application requires 64-bit, I would suggest running your application servers (not the database) in 32-bit.
Passenger is still a very good choice to use so you are not behind the times or anything. It is also actively supported and has a very good development team that contributes a lot to the community. We have been using Unicorn and it has been very good. Our favorite functionality is to be able to upgrade apps/ruby/nginx without dropping a connection.
I have been working to deploy a relatively large Rails app (Rails 2.3.5), and while doing some load testing recently we discovered that the throughput for the site is way below the expected level of traffic.
We were running on a standard 32-bit server with 3GB of RAM and CentOS, running Ruby Enterprise Edition (latest build), Passenger (latest build) and Nginx (latest build). When there are only one or two users the site runs fine (as you would expect), but when we try to ramp the load up to ~50 concurrent requests it completely dies (Apache Bench reports ~2.3 req/sec, which is terrible).
We are running RPM and trying to determine where the load issue is, but it's pretty evenly distributed across Rails, SQL and Memcached, so we're more or less going through and optimizing the codebase.
Out of sheer desperation we spun up a large EC2 instance (Ubuntu 9.10, 7.5GB RAM, 2 compute units/cores) and set up the same configuration as on the original server, and while there were more resources we were still seeing pathetic results.
So, after spending too much time trying to optimize, playing with caching configuration etc., I decided to test the throughput of some Mongrels, and ta-da, they are performing much, much better than Passenger.
Currently the configuration is 15 Mongrels proxied via Nginx, and we seem to be just meeting our load requirements, but it's not quite enough to make me comfortable going live... What I was wondering is if anyone knows of some possible causes for this?
My configuration for passenger/nginx was:
Nginx workers: tried between 1 and 10, usually three though.
Passenger max pool size: 10 - 30 (yes, these numbers are quite high)
Passenger global queueing: tried both on and off.
Nginx gzip on: yes
It might pay to note that we had increased the nginx max client body size to 200m to allow for large file uploads.
Anyway, suggestions would be really appreciated. While the Mongrels are working fine, this changes how we do things a lot, and I would really prefer to use Passenger; besides, wasn't it supposed to make this easier and perform better?
Maybe your SQL connection pool size is too small? This essentially limits the parallelism of database work in your application, which in turn builds up to much increased load as soon as you put work on your app stack...
As a first step I would deploy a minimal "Hello World" type Rails application to your environment and see what throughput you get with that. Doing that will at least tell you if your problem is with the environment or somewhere in your application.
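A minimal sketch of such a baseline action (Rails 2.3-era syntax; the controller name is hypothetical). If even this is slow under ab, the problem is the environment rather than your code:

```ruby
# app/controllers/ping_controller.rb: a do-nothing action for baselining.
# It touches neither the database nor the view layer.
class PingController < ApplicationController
  def index
    render :text => 'ok'
  end
end
```

Then hit it with the same Apache Bench invocation you used against the real pages and compare the req/sec.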
Very soon I plan on deploying my first Ruby on Rails application to a production environment and I've even picked a webhost with all the managed server and Capistrano goodness you'd expect from a RoR provider.
The provider allows for Mongrel, Thin, Passenger & FastCGI web servers, which seems very flexible, but I honestly don't know the differences between them. I have looked into them some, but it all gets a bit much when they start talking about features and maximum simultaneous requests, and the data seems to vary depending on who's publishing it.
I have looked at Passenger (on the surface) - which does seem very appealing to me - but I was under the impression that Passenger wasn't the actual webserver, and instead was more like a layer on top of Apache or nginx and managed spawned instances of the application (like a Mongrel cluster).
Can anyone please set me straight on the differences, in layman's terms, so I can choose wisely (because anyone who's seen Indiana Jones and the Last Crusade knows what happens if you choose poorly)?
Short answer
Go with Apache/Nginx + Passenger. Passenger is fast, reliable, easy to configure and deploy. Passenger has been adopted by a large number of big Rails applications, including Shopify.
The long answer
Forget about CGI and FastCGI. In the beginning there were no other alternatives, so the only way to run Rails was using CGI or the faster FastCGI. Nowadays almost nobody runs Rails under CGI, and the latest Rails versions no longer provide the .cgi and .fcgi runners.
Mongrel was a widely adopted solution, the best replacement for CGI and FastCGI. Many sites still use Mongrel and mongrel_cluster; however, the Mongrel project is almost dead and many projects have already moved to other solutions (mostly Passenger).
Also, a Mongrel-based architecture is quite hard to configure because it needs a frontend proxy (Thin, nginx) and a backend composed of multiple Mongrel instances.
Passenger has been gaining widespread attention since it was released. Many projects have switched from Mongrel to Passenger for many reasons, including (but not limited to) easy deployment, maintainability and performance. Additionally, Passenger is now available for both Apache and Nginx.
The simplest way to use Passenger is the Apache + Passenger configuration. One Apache installation and multiple Passenger processes.
If you need better performance and scalability, you can use Nginx as a frontend proxy and forward all Rails requests to multiple backend servers, each one composed of Apache + Passenger.
I'm not going into the technical details here; this solution is intended for Rails projects with a high level of traffic.
Even more complex solutions include a combination of different levels including http proxies and servers. You can have an idea of what I'm talking about reading some internal details from GitHub and Heroku.
Right now, Passenger is the best answer for most Rails projects.
Mongrel and Thin are single Ruby process servers that you would run multiples of as a cluster behind some type of proxy (like Apache or Nginx). The proxy would manage which instance of Mongrel or Thin services each request.
Passenger creates an interface between Apache or Nginx and your application: it creates an application spawning process and then forks off processes to serve incoming requests as they come in. There are a lot of configuration options for how long those processes live, how many there can be, and how many requests they will serve before they die. This is by far the most common way to scale up and handle a high-traffic application, but it is not without drawbacks. It can only be done on a *nix operating system (Linux, Mac OS X, etc.). Also, these processes spin up on demand, so if no one accesses your site for a while, the processes die and the next request has the delay of one starting back up again. With Mongrel and Thin, the process is always running. Sometimes, though, your processes being new and fresh can be a good thing for memory usage, etc.
If it is going to be a relatively low traffic site, Mongrel or Thin provides a simple, easy to manage way to deploy the application. For higher traffic sites where you need the smart queuing and process management of something like Passenger, it is a very good solution.
As for fastcgi, you probably want to use that as a last option.
I use Passenger + nginx. It works really, really well.
To get some instant performance boost with Passenger, I recommend using Ruby Enterprise Edition.
I read in some books that Phusion Passenger is the answer to easy Ruby on Rails deployment. But my friend said that first there was Apache + a bunch of Mongrels, and then lighttpd, and then nginx, and now Passenger, and it seems endless...
He also said he uses DreamHost, which uses Passenger, and sometimes he sees his requests not being processed.
So I wonder: is Passenger the final answer to RoR deployment? Do you use it, and have you used the "ab" command to test whether the site is performing well?
short answer: yes.
long answer: yeeeeeeeeeeeeeeesssssssssssssssss.
In all seriousness, Phusion Passenger and Ruby Enterprise Edition have taken pretty much all of the pain out of moving a Rails app into production. Previous approaches, including running a suite of Mongrels, required lots of setup surrounding starting, stopping, and recycling listener processes, which Passenger handles transparently or via simple Apache (or nginx) configuration options. And REE's copy-on-write-friendly garbage collector means that forking off a new listener uses MUCH less memory and is faster to boot (in Passenger's "smart" spawning mode).
Edit: #srboisvert makes a very good point; Passenger isn't the final answer to RoR deployment, but for now it's my favorite by far. One day, after a lot of hard engineering problems are solved, mainstream Ruby will probably move from hosting RoR using a multi-process model to a single-process model, which would make management even easier than with Passenger.
It's the best solution so far. I started deploying with FCGI and it was a pain. Then came mongrel and it was better. Then came mod_rails and it was WAY better.
Also, a lot of large, cool applications are migrating to mod_rails, including some by 37signals, so you know that's good.
I'll just end with a quote from DHH:
The one-piece solution with Phusion Passenger
Once you've completed the incredibly simple installation, you get an Apache that acts as both web server, load balancer, application server and process watcher. You simply drop in your application and touch tmp/restart.txt when you want to bounce it and bam, you're up and running.
But somehow the message of Passenger has been a little slow to sink in. There's already a ton of big sites running off it. Including Shopify, MTV, Geni, Yammer, and we'll be moving over first Ta-da List shortly, then hopefully the rest of the 37signals suite quickly thereafter.
So while there are still reasons to run your own custom multi-tier setup of manually configured pieces, just like there are people shying away from mod_php for their particulars, I think we've finally settled on a default answer. Something that doesn't require you to really think about the first deployment of your Rails application. Something that just works out of the box. Even if that box is a shared host!
In conclusion, Rails is no longer hard to deploy. Phusion Passenger has made it ridiculously easy.
Yes, it is the easiest, fastest and most efficient solution.
Now that a lot of problems with gems like soap4r etc. have been resolved in recent releases, Passenger is the answer to deployment questions.
We're running Apache/mod_rails in a load-balanced environment with HAProxy in front of 2 servers. It's much more reliable than our previous setup using Mongrel/Apache.
It's very easy to take control over:
- the number of Passenger processes running in Apache
- the number of Passenger processes running per application
and all that without the pain of tweaking a number of config files (mod_proxy, Apache, etc.). Setting up a virtual host and adding three lines to your Apache config is basically enough to get it running.
Matt
Final Answer? Nothing is ever the final answer.
I'd say Passenger is the current answer though.
Yes. I've been running Nginx/Passenger in front of Apache for whatever still needs PHP since they released 2.2.0 a few weeks back. Especially with Ruby Enterprise Edition, it approaches what I would call "perfect".
I guess that now people will stick to mod_rails for many years. The module is really good. Configuration is dead simple. It will be hard to replace it with some better solution. Similar to mod_php. The only key component which is missing: Windows port.
In some situations (enterprise, etc) the JVM can also be a good option.
I have a simple Rails app deployed on a 500 MB Slicehost VPS. I'm the only one who uses the app. When I run it on my laptop, it's fast enough. But the deployed version is insanely slow. It takes 6 to 10 seconds to load the login screen.
I would like to find out why it's so slow. Is it my code? (Don't think so because it's much faster locally, but maybe.) Is it Slicehost's server being overloaded? Is it the Internet?
Can someone suggest a technique or set of steps I can take to help narrow down the cause of this problem?
Update:
Sorry, I forgot to mention: I'm running it under CentOS 5 using Phusion Passenger (a.k.a. mod_rails or mod_rack).
If it is just slow the first time you load it, it is probably because of Passenger killing the process due to inactivity. I don't remember all the details, but I do recall reading about people who used cron jobs to keep at least one process alive, to avoid the lag that can occur when Passenger needs to reload the environment.
Edit: more details here
Specifically, the pool idle time defaults to 2 minutes, which means that after two minutes of idling Passenger would have to reload the environment to serve the next request.
First, find out if there's a particularly slow response from the server. Use Firefox and the Firebug plugin to see how long each component (including JavaScript and graphics) takes to download. Assuming the main page itself is what is taking all the time, you can start profiling the application. You'll need to find a good profiler, and as I don't actually work in Ruby on Rails, I can't suggest any: google "profile ruby on rails" for some options.
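One commonly used option is the ruby-prof gem; a minimal sketch, assuming the gem is installed, wrapped around whatever code path you suspect:

```ruby
require 'ruby-prof'

# Profile a suspect code path and print a flat report of where time goes.
result = RubyProf.profile do
  # ... the slow code you want to measure ...
end

RubyProf::FlatPrinter.new(result).print(STDOUT)
```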
As YenTheFirst points out, the server software and config you're using may contribute to a slowdown, but A) Slicehost doesn't choose that, you do, as Slicehost just provides very raw server "slices" that you can treat as dedicated machines, and B) you're unlikely to see a script that runs instantly suddenly take 6 seconds just because it's running as CGI. Something else must be going on. Check how much RAM you're using: have you gone into swap? Is the login slow only the first time it's hit, indicating some startup issue, or is it always that slow? Is static content served slowly? That would tend to indicate a network issue (either on the Slicehost side or on your local network) is slowing things down, assuming you're not in swap.
When you say "fast enough" you're being vague: does the laptop version take 1 second to the Slicehost's 6? That wouldn't be entirely surprising if the laptop is decent: after all, the reason slices are cheap is that they're a fraction of a full server. You're probably using 1/32 of an 8-core machine at Slicehost, as opposed to both cores of a modern laptop. The Slicehost cores are quick, but your laptop could be a screamer compared to 1/4 of a core. :)
Try to pinpoint where the slowness lies:
1/ Is the application slow, or the infrastructure (network + web server)? Put a static file on your web server and access it through your browser.
2/ If that is fast, it is probably a problem with the application and/or server configuration:
- is database access slow?
- try a page with a simple loop (see the sketch after this list): is it slow?
3/ If it is slow, it is probably your infrastructure. You can check:
- bad network connection: do a packet capture (with Wireshark, for example) and look for retransmissions, duplicate packets, etc.
- is DNS resolution slow?
- is the server misconfigured?
- etc.
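A sketch of the kind of "simple loop" page meant in step 2: a Rails action that does pure computation and never touches the database (controller name hypothetical). If this is fast but real pages are slow, suspect the database; if this is also slow, suspect the stack or the host:

```ruby
# app/controllers/diagnostics_controller.rb (hypothetical)
class DiagnosticsController < ApplicationController
  def loop_test
    total = 0
    100_000.times { |i| total += i }   # pure CPU work, no DB access
    render :text => "done: #{total}"
  end
end
```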
What is Slicehost using to serve it?
Fast options are things like Mongrel, or Apache's mod_rails (also called Phusion Passenger, or something like that).
These are dedicated servers (or plugins to servers) which run an instance of your rails app.
If your host isn't using that, then it's probably defaulting to CGI. Rails comes with a simple CGI script that will serve the page, but it reloads the app for every page.
(edit: I suspect that this is the most likely case, that your app is running off of the CGI in /webapp_directory/public/dispatch.cgi, which would explain the slowness. This tends to be a default deployment on many hosts, since it doesn't require extra configuration on their part, but it doesn't give good performance)
If your host supports "Fast CGI", rails supports that too. Fast CGI will open a CGI session, and keep it open for multiple pages, so you get much better performance, but it's not nearly as good as Mongrel or mod_rails.
Secondly, is it in 'production' or 'development' mode? The easy way to tell is to go to a page in your app that gives an error. If it shows you a stack trace, it's in development mode, which is slower than production mode. Mongrel and mod_rails have startup options to determine whether to run the app in production or development mode.
Finally, if your database is slow for whatever reason, that will be a big bottleneck as well. If you do have a good deployment (Mongrel/mod_rails/etc.) in production mode, try looking into that.
Do you have a lot of data in your DB? I would double-check that you have indexed all the appropriate columns, because this can make a huge difference. On your local dev system you probably have a lot more memory than on your 500 MB slice, which would result in the DB running a lot slower if you have big, unindexed tables. You can also enable the slow query log in MySQL to pinpoint queries hitting columns without indexes.
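As an illustration, adding a missing index is a one-line migration; the table and column here are hypothetical. On the MySQL side, the slow query log (its threshold is controlled by long_query_time) will show you which queries are worth this treatment:

```ruby
# db/migrate/20xx_add_index_to_orders.rb (hypothetical table and column)
class AddIndexToOrders < ActiveRecord::Migration
  def self.up
    add_index :orders, :user_id     # index the column used in lookups/joins
  end

  def self.down
    remove_index :orders, :user_id
  end
end
```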
Other than that, yes, Passenger will need to spin up a process for you if you have not been using the site recently. If this is the case, you should see a significant speed increase on the second, and especially third and later, page loads.
You might want to run a local virtual machine with 500 MB of RAM. Are you doing a lot of client-server interaction? Delays over the WAN are significant.
You might want to check out RPM (there's a free "lite" version too) and/or New Relic's Tune Up.
Your CPU time is guaranteed by Slicehost using the Xen virtualization system, so it's not that. Don't have the other answers for you, sorry! Might try 'top' on a console while you're trying to access the page.
If you are using Firefox and doing localhost testing (or maybe even on a LAN), you may want to try editing the network.dns.disableIPv6 setting.
Type about:config in the address bar and filter for network.dns.disableIPv6 and double-click to set to true.
This bug has been reported mainly on Vista, but on some other OSes as well.
You could try running 'top' when you SSH in to see which process is heavy. If you also have problems logging in, perhaps you could try looking at the statistics in the Slicehost manager.
If you discover it is MySQL's fault, consider decreasing the number of servers it can spawn.
512 MB seems decent for a Rails application; you might want to check whether you've misconfigured something, too.