Rails Server Memory Leak/Bloating Issue

We are running 2 Rails applications on a server with 4GB of RAM. Both apps use Rails 3.2.1 and, whether run in development or production mode, they eat RAM at an incredible rate, consuming up to 1.07GB each day. Keeping the server running for just 4 days triggered all the memory alarms in monitoring, and we had just 98MB of RAM free.
We tried ActiveRecord optimizations related to bloating, but to no effect. Please help us figure out how to trace the issue and determine which controller is at fault.
We are using a MySQL database and the WEBrick server.
Thanks!

This is incredibly hard to answer without looking into the project itself. Though I am quite sure you won't be using WEBrick in your target production build (right?), so check whether it behaves the same under Passenger or whatever server you choose.
Also, without knowing the details of the project, I would suggest looking at features like PDF generation, CSV parsing, etc. I have seen a case where generating PDF files was eating resources in a similar fashion, leaving something like 5MB of un-garbage-collected memory per run.
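One rough way to trace which controller is at fault: log the process RSS delta per request from an around filter and watch which actions grow the process. A minimal sketch, assuming Linux (where RSS can be read from /proc/self/status) and Rails 3.2's around_filter:

class ApplicationController < ActionController::Base
  around_filter :log_memory_delta

  private

  # Resident set size of this process, in kB (Linux-only).
  def rss_kb
    File.read("/proc/self/status")[/VmRSS:\s+(\d+)/, 1].to_i
  end

  def log_memory_delta
    before = rss_kb
    yield
    Rails.logger.info("MEM #{controller_name}##{action_name}: +#{rss_kb - before} kB (total #{rss_kb} kB)")
  end
end

Actions that show a steadily positive delta on every hit are your bloat suspects.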
Good luck.

Related

Issue with passenger

Recently our Ruby on Rails application was upgraded to Rails 2.3.8. We also replaced Mongrel/Mongrel Cluster with Phusion Passenger during the upgrade.
Whenever we deploy our application, it responds quickly at first, but response time gradually increases. We also noticed CPU usage on the database box spiking to 400% and a lot of requests waiting in the global queue. This seems to happen only in our production environment.
Can anyone let me know how I should go about debugging this issue?
Is there any way we can restrict the number of connections between Passenger and the DB?
Also, is there a way we can set up connection pooling in Passenger?
Thanks,
Sivakumar.
I don't think the issue is with Passenger. If you are having a large spike on your DB box CPU, your issue probably lies there. Without more information about your database it's hard to give you specifics, but here are a few things you could try:
Run top and check how much of your total memory the mysqld or other database process is consuming. If this amount is not high then you probably need to tune MySQL settings to take advantage of the RAM on your DB box.
Analyze your running database queries with the mytop command. You might have some queries that are hogging all of your system resources or causing lots of swapping.
Look through your MySQL slow log and see if you have queries that take over 1 second to run (see the sketch after this list for enabling it).
Check your database engines. Are you using MyISAM and/or InnoDB? You might need to tune your database differently so that it can allocate the proper amount of memory and resources to each engine.
Consult a DBA. They'll be able to take a look at your usage and application and tell you more definitively if the issue lies with your database or application.
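For the slow-log suggestion above, a minimal my.cnf sketch (option names as in MySQL 5.0; in 5.1+ they became slow_query_log and slow_query_log_file, and the log path here is just an example):

[mysqld]
# log any statement taking longer than 1 second
log_slow_queries = /var/log/mysql/slow.log
long_query_time  = 1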
P.S.: I would recommend upgrading to Passenger 3; it performs better than Passenger 2.

Rails (3) server - what to use nowadays?

I've been using Ruby Enterprise Edition and Passenger (for Apache, since I run Apache anyway for other things) for some time, but I'm wondering if there's a new trend about what to use on servers nowadays.
For example I've heard about Thin, Unicorn... I also know that 1.9.2 is faster than REE, but I wonder about RAM consumption. I'd rather have it consume less RAM even at the expense of some speed.
Thanks for all advice.
If you want minimal memory you should try Thin.
It does not have a master worker like Unicorn or Passenger, and thus uses less memory.
If you have a very small app that needs to run on a small VM, you can use one Thin worker + nginx. I ran several Rails 3.2 apps using Thin + nginx + Postgres on 256MB VMs without swapping.
Unicorn is faster, but it needs a master worker. It's good if you want to run on Heroku: you can set 2 or 3 workers and stay within the 512MB limit.
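As a rough illustration, a minimal config/unicorn.rb along those lines (the worker count and timeout are just examples to adapt):

# config/unicorn.rb
worker_processes 3   # 2-3 workers for a 512MB limit
timeout 30           # kill requests stuck for more than 30s
preload_app true     # load the app once in the master, share memory copy-on-write

before_fork do |server, worker|
  # disconnect in the master so each forked worker opens its own DB connection
  ActiveRecord::Base.connection.disconnect! if defined?(ActiveRecord::Base)
end

after_fork do |server, worker|
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
end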
If your app is very big and you have many long-running requests, I would check out JRuby and Trinidad/TorqueBox.
I converted a few apps from MRI + Sidekiq to JRuby + Trinidad + Trinidad_Scheduler. I get about 100-200 req/sec using a pool of 50 threads in a Trinidad server!
What I like about JRuby is that you can combine everything on one Rails server: on the same JVM you can put together the cache store (EhCache), scheduling, background processing, and real multithreading.
You don't need to run Redis, memcached, Resque, or Sidekiq separately.
I'm not saying they are not good (I love Sidekiq and Resque), but you can decrease your complexity by combining everything in one process and still have high concurrency.
A more advanced, enterprise-grade solution is TorqueBox; it has support for clustering and is super scalable. But I've had problems with my app crashing on TorqueBox, so I'm sticking with Trinidad for now.
The disadvantage of JRuby? Memory! A Trinidad server will use a minimum of 512MB, and up to 2-3GB of RAM.
Also, on a single-threaded server, a single request from a Rails app running Ruby 1.9.3 is about twice as fast as the same request on JRuby.
Another option is Puma: you can get full multithreading on MRI with Puma. I myself could not get it stable enough on my apps, though.
So, it all depends on your requirements: memory usage, full threading, and concurrency.
Apart from Passenger, have a look at Unicorn, Trinidad, Puma, and TorqueBox. Those seem to be the top Rails servers right now.
There is a great book that introduces converting your Rails app to JRuby and deploying it using several methods, such as Trinidad.
http://pragprog.com/book/jkdepj/deploying-with-jruby
The TorqueBox documentation is amazingly good. It's very detailed and explains really well how to use all the TorqueBox features.
http://torquebox.org/documentation/
I hope sharing my experience has helped.
Passenger is still extremely strong, especially as REE will naturally support 1.9 in the near future. The fact that your application can crash without affecting anything else on your machine is an amazing feature to have. Deploying code is extremely easy because the server will continue to accept connections, which means less frustration/stress for you.
However, in terms of comparisons:
Here is a great resource for checking out various comparisons (including memory consumption) of all the new servers.
It compares Thin, Unicorn, Passenger, TorqueBox, Glassfish, and Trinidad:
http://torquebox.org/news/2011/03/14/benchmarking-torquebox-round2/
Mike Lewis' link does a good job of comparing those different Ruby servers. My personal experience has been with nginx/REE/Passenger, and it's been good. I haven't tried the others, so I can't comment on that.
However, I can speak to RAM usage. Your biggest RAM savings will come from using 32-bit servers. In my experience (3x 3GB app servers), 64-bit REE/Passenger processes took up to 2x as much RAM as their 32-bit counterparts. We saw a significant performance increase moving from 64-bit to 32-bit servers, everything else staying the same. Unless your application requires 64-bit, I would suggest running your application servers (not your database) in 32-bit.
Passenger is still a very good choice, so you are not behind the times or anything. It is also actively supported and has a very good development team that contributes a lot to the community. We have been using Unicorn and it has been very good. Our favorite feature is being able to upgrade apps/Ruby/nginx without dropping a connection.

Passenger hosted Rails app *painfully* slow, but the server is a beast

I have been working on deploying a relatively large Rails app (Rails 2.3.5), and in some recent load testing we discovered that the site's throughput is way below the expected level of traffic.
We were running on a standard 32-bit server with 3GB of RAM and CentOS, using Ruby Enterprise Edition (latest build), Passenger (latest build), and Nginx (latest build). When there are only one or two users the site runs fine (as you would expect), but when we ramp the load up to ~50 concurrent requests it completely dies (Apache Bench reports ~2.3 req/sec, which is terrible).
We are running New Relic RPM to try to determine where the load issue is, but it's pretty evenly distributed across Rails, SQL, and Memcached, so we're more or less going through and optimizing the codebase.
Out of sheer desperation we spun up a large EC2 instance (Ubuntu 9.10, 7.5GB RAM, 2 compute units/cores) and set up the same configuration as on the original server; despite the extra resources we were still seeing pathetic results.
So, after spending too much time trying to optimize, playing with the caching configuration, etc., I decided to test the throughput of some Mongrels, and ta-da, they performed much, much better than Passenger.
Currently the configuration is 15 Mongrels proxied via Nginx, and we seem to just meet our load requirements, but that's not quite enough to make me comfortable going live... What I was wondering is whether anyone knows of possible causes for this?
My configuration for passenger/nginx was:
Nginx workers: tried between 1 and 10, usually three though.
Passenger max pool size: 10 - 30 (yes, these numbers are quite high)
Passenger global queueing: tried both on and off.
Nginx gzip: on
It might pay to note that we had increased the nginx max client body size to 200m to allow for large file uploads.
Anyway, suggestions would be really appreciated. While the Mongrels are working fine, this changes how we do a lot of things, and I would really prefer to use Passenger; besides, wasn't it supposed to make this easier and perform better?
Maybe your SQL pool size is too small? It essentially limits the parallelism of database work in your application, which in turn builds up to a much increased load as soon as you put work on your app stack...
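For what it's worth, that pool is set per Rails process in config/database.yml; a minimal sketch (the database name is hypothetical, and the effective cap is pool size times the number of Passenger processes):

production:
  adapter: mysql
  database: myapp_production
  pool: 15   # connections available per process; raise this if requests queue on checkout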
As a first step I would deploy a minimal "Hello World"-type Rails application to your environment and see what throughput you get with that. Doing that will at least tell you whether your problem is with the environment or somewhere in your application.
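Hitting that hello-world app with Apache Bench at the same concurrency as your load test would give a directly comparable number (the URL is a placeholder):

ab -n 1000 -c 50 http://your-server/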

Can I limit Apache+Passenger memory usage on a server without swap space?

I'm running a Rails application with Apache + Passenger on virtual servers that do not have any swap space configured.
The site gets a decent amount of traffic, 200K+ daily requests, and sometimes the whole system runs out of memory, causing odd behaviour across the system.
The question is: is there any way to configure Apache or Passenger not to run out of memory (e.g. gracefully restarting Passenger instances when they start using, say, more than 300MB of memory)?
The servers have 4GB of memory, and currently I'm using Passenger's PassengerMaxRequests option, but it does not seem to be the most solid solution here.
At the moment I also cannot switch to nginx, so that is not an option for preserving some room.
Any clever ideas I'm probably missing are welcome.
Edit: My temporary solution
I did not go with restarting Rails instances when they exceed a certain amount of memory usage. Engine Yard wrote a great blog post on the ActiveRecord memory bloat issue, which is our main suspect here. As I did not have much time to optimize the application, I set PassengerMaxRequests to 300 and added an extra 2GB of memory to the server. Things have been good since then. At first I was worried that continuously restarting Rails instances would make the site slow, but it does not seem to have an impact I should worry about.
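For reference, those settings live in the Apache config. A sketch of the setup described above (PassengerMaxPoolSize is an extra knob not mentioned in this post; its value here is illustrative):

PassengerMaxRequests 300   # recycle each application process after 300 requests
PassengerMaxPoolSize 6     # optional: cap how many app processes Passenger spawns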
If by "limiting" you mean killing those processes, and if this is the only application on the server and it is Linux, then you have two choices:
Set the maximum amount of memory one process may have, via ulimit (the # is a root prompt; with no argument it prints the current limit, with a value it sets the cap in kB):
# ulimit -m
unlimited
# ulimit -m 307200
Or use cgroups for similar behavior:
http://en.wikipedia.org/wiki/Cgroups
I would advise against restarting instances that go over the "memory limit" (if that is even possible), because that may put your system into an infinite loop where a process repeatedly reaches the limit and restarts.
Maybe you could write a simple daemon that constantly watches the processes and kills any that go over a certain amount of memory. Be sure to log information about any process killed this way, so you can fix the issue when it comes up.
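A rough sketch of such a watchdog in Ruby (Linux ps output assumed; the process match, 300MB threshold, and poll interval are all placeholders to adapt):

#!/usr/bin/env ruby
LIMIT_KB = 300 * 1024   # kill anything over ~300MB RSS

loop do
  # ps columns: PID RSS COMMAND (full args)
  `ps -eo pid,rss,args`.each_line do |line|
    pid, rss, args = line.split(' ', 3)
    next unless args.to_s.include?('Rails:')   # Passenger app processes; adjust the match
    if rss.to_i > LIMIT_KB
      $stderr.puts "#{Time.now}: killing #{pid} (#{args.strip}) at #{rss} kB"
      Process.kill('TERM', pid.to_i)
    end
  end
  sleep 30
end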
I'd also look into getting some real swap space on there... This seems like a bad hack.
I have a problem where Passenger processes end up going out of control and consuming too much memory. I wrote the following script, which has been helping to keep things under control until I find the real solution: http://www.codeotaku.com/blog/2012-06/passenger-memory-check/index. It might be helpful.
Passenger web instances don't contain important state (generally speaking), so killing them isn't normally a problem, and Passenger will restart them as and when required.
I don't have a canned solution, but you might want to use two commands that ship with Passenger to keep track of memory usage and the number of processes: passenger-status and sudo passenger-memory-stats; see
Passenger users guide for Nginx or
Passenger users guide for Apache.

How can I find out why my app is slow?

I have a simple Rails app deployed on a 500MB Slicehost VPS. I'm the only one who uses the app. When I run it on my laptop, it's fast enough, but the deployed version is insanely slow: it takes 6 to 10 seconds to load the login screen.
I would like to find out why it's so slow. Is it my code? (Don't think so because it's much faster locally, but maybe.) Is it Slicehost's server being overloaded? Is it the Internet?
Can someone suggest a technique or set of steps I can take to help narrow down the cause of this problem?
Update:
Sorry, I forgot to mention: I'm running it under CentOS 5 using Phusion Passenger (a.k.a. mod_rails or mod_rack).
If it is just slow the first time you load it, that is probably because Passenger kills the process due to inactivity. I don't remember all the details, but I recall reading about people who used cron jobs to keep at least one process alive and avoid the lag that occurs when Passenger needs to reload the environment.
Edit: more details here
Specifically, the pool idle time defaults to 2 minutes, which means that after two minutes of idling, Passenger has to reload the environment to serve the next request.
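A minimal version of the cron trick (the URL and one-minute schedule are placeholders); alternatively, raising PassengerPoolIdleTime in the Apache config keeps processes around longer:

# crontab entry: hit the app every minute so Passenger keeps a warm process
* * * * * curl -s http://localhost/ > /dev/null 2>&1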
First, find out if there's a particularly slow response from the server. Use Firefox and the Firebug plugin to see how long each component (including JavaScript and graphics) takes to download. Assuming the main page itself is what is taking all the time, you can start profiling the application. You'll need to find a good profiler, and as I don't actually work in Ruby on Rails, I can't suggest any: google "profile ruby on rails" for some options.
As YenTheFirst points out, the server software and config you're using may contribute to a slowdown, but (a) Slicehost doesn't choose that, you do, as Slicehost just provides very raw server "slices" that you can treat as dedicated machines; and (b) you're unlikely to see a script that runs instantly suddenly take 6 seconds just because it's running as CGI. Something else must be going on. Check how much RAM you're using: have you gone into swap? Is the login slow only the first time it's hit, indicating some startup issue, or is it always that slow? Is static content served slowly? That would tend to mean some network issue (either on the Slicehost side or your local network) is slowing things down, assuming you're not in swap.
When you say "fast enough" you're being vague: does the laptop version take 1 second to Slicehost's 6? That wouldn't be entirely surprising if the laptop is decent: after all, the reason slices are cheap is that they're a fraction of a full server. You're probably using 1/32 of an 8-core machine at Slicehost, as opposed to both cores of a modern laptop. The Slicehost cores are quick, but your laptop could be a screamer compared to 1/4 of a core. :)
Try to pinpoint where the slowness lies:
1/ Is the application slow, or the infrastructure (network + web server)?
Put a static file on your web server and access it through your browser.
2/ If that is fast, it is probably a problem with the application + server configuration.
Is database access slow?
Try a page with a simple loop: is it slow?
3/ If that is slow, it is probably your infrastructure. You can check:
bad network connection: do a packet capture (with Wireshark, for example) and look for retransmissions, duplicate packets, etc.
is DNS resolution slow?
is the server misconfigured?
etc.
What is Slicehost using to serve it?
Fast options are things like Mongrel, or Apache's mod_rails (also called Phusion Passenger or something like that).
These are dedicated servers (or plugins to servers) which run an instance of your rails app.
If your host isn't using that, then it's probably defaulting to CGI. Rails comes with a simple CGI script that will serve the page, but it reloads the app for every page.
(edit: I suspect that this is the most likely case, that your app is running off of the CGI in /webapp_directory/public/dispatch.cgi, which would explain the slowness. This tends to be a default deployment on many hosts, since it doesn't require extra configuration on their part, but it doesn't give good performance)
If your host supports "Fast CGI", rails supports that too. Fast CGI will open a CGI session, and keep it open for multiple pages, so you get much better performance, but it's not nearly as good as Mongrel or mod_rails.
Secondly, is it in 'production' or 'development' mode? The easy way to tell is to go to a page in your app that gives an error. If it shows you a stack trace, it's in development mode, which is slower than production mode. Mongrel and mod_rails have startup options to determine whether to run the app in production or development mode.
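For example (illustrative only, and version-dependent): Mongrel takes the environment as a command-line flag, while Passenger reads a RailsEnv directive from the Apache config:

mongrel_rails start -e production     # Mongrel
RailsEnv production                   # in the Apache vhost, for mod_rails/Passenger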
Finally, if your database is slow for whatever reason, that will be a big bottleneck as well. If you do have a good deployment (Mongrel/mod_rails/etc.) in production mode, try looking into that.
Do you have a lot of data in your DB? I would double-check that you have indexed all the appropriate columns, because this can make a huge difference. On your local dev system you probably have a lot more memory than on your 500MB slice, so the DB will run much slower there if you have big, unindexed tables. You can also enable the slow query log in MySQL to pinpoint columns without indexes.
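For example, a Rails 2.x-style migration adding one such index (the table and column are hypothetical):

class AddIndexToUsersEmail < ActiveRecord::Migration
  def self.up
    add_index :users, :email   # speeds up lookups such as login by email
  end

  def self.down
    remove_index :users, :email
  end
end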
Other than that, yes, Passenger will need to spin up a process for you if the site hasn't been used recently. If this is the case, you should see a significant speed increase on the second, and especially the third and later, page loads.
You might want to run a local virtual machine with 500MB to compare. Also, are you doing a lot of client-server interaction? Delays over the WAN are significant.
You might want to check out New Relic RPM (there's a free "lite" version too) and/or New Relic's Tune Up.
Your CPU time is guaranteed by Slicehost using the Xen virtualization system, so it's not that. I don't have the other answers for you, sorry! You might try running 'top' in a console while you're trying to access the page.
If you are using Firefox and doing localhost testing (or maybe even on a LAN), you may want to try editing the network.dns.disableIPv6 setting.
Type about:config in the address bar, filter for network.dns.disableIPv6, and double-click it to set it to true.
This bug has been reported mainly on Vista, but on some other OSes as well.
You could try running 'top' when you SSH in, to see which process is heavy. If you also have problems logging in, you might try the statistics in the Slicehost manager.
If you discover it is MySQL's fault, consider decreasing the number of servers it can spawn.
512MB seems decent for a Rails application; you might also check whether you have misconfigured something.
