Apache on Windows - growing VM Size of httpd.exe - ruby-on-rails

I have a Rails app running with Apache on a Windows 2003 server. I am using the Apache Lounge build of Apache.
The httpd.exe process's Mem Usage and VM Size grow constantly and quite fast, even though there is not much load. Most alarming is the VM Size, which grows at a much faster pace, reaching several GB in a couple of days, while Mem Usage may reach several hundred MB over the same period. This eventually results in the app crashing.
I am trying to find an explanation for the rate of VM Size growth and a way to stop it.

I tried adding 'SSLSessionCache none' to httpd.conf and it solved the problem for me! Now httpd.exe's Mem Usage and VM Size do not seem to grow at all.
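For reference, a sketch of the relevant httpd.conf directives (the shmcb path and cache size below are illustrative, not from the original answer):

    # Disable the SSL session cache entirely, as in the fix above:
    SSLSessionCache none

    # Alternatively, keep session caching but use the in-memory shmcb
    # provider rather than a dbm-backed cache:
    SSLSessionCache shmcb:logs/ssl_scache(512000)
    SSLSessionCacheTimeout 300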

This is fairly normal behaviour for the Apache-to-Rails glue modules, but there is a project that reduces it, ModRails: http://www.modrails.com/ . It's the best way to run Rails with Apache. I don't use it myself, though, because I'm using Mongrel.

Related

Reducing Redmine's memory usage - Low Hanging Fruit

I am running a Redmine instance with Passenger and Nginx. With only a handful of issues in the database, Redmine consumes over 80 MB of RAM.
Can anyone share tips for reducing Redmine's memory usage? The Redmine instance is used by 3 people and I am willing to sacrifice speed.
There are not really any low-hanging fruits. And if there were, we would have already included and activated them by default.
80 MB RSS (as opposed to virtual size, which can be much more) is actually pretty good. In normal operation, it will use between 70 and 120 MB RSS per process (depending on the deployment model; rather less on Passenger).
As andrea suggested, you can reduce your overall memory footprint by about one third when you use REE (Ruby Enterprise Edition, which is also free). But this saving can only be achieved when you run more than one process (each requiring the above memory). REE achieves the saving by optimizing Ruby for a technique called copy-on-write, so that additional application processes take less memory.
So I'm sorry, your (hypothetical) 128 MB vServer will probably not suffice. For a small installation, you might be able to squeeze a minimal installation into 256 MB, but it only starts to be anything but a complete pain in the ass at 512 MB (including database).
That's because of how Rails applications work, in contrast to things like PHP. They require a running application server instance. That instance is typically able to answer one request at a time, using about the same amount of memory all the time. So your memory consumption is roughly proportional to the number of application processes you run, independent of actual load. But if you tune your system properly, you can get quite a number of reqs/s out of one process.
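As a footnote to the copy-on-write point above: REE exposes this as a GC flag, which Passenger turns on automatically when it detects REE. A minimal sketch, in case you want to set it yourself:

    # Only meaningful under Ruby Enterprise Edition; stock MRI 1.8 does
    # not define this accessor, hence the guard.
    if GC.respond_to?(:copy_on_write_friendly=)
      GC.copy_on_write_friendly = true
    end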
Maybe I am replying very late, but I got stuck on the same issue and found a link on optimizing Ruby/Rails memory usage, which worked for me:
http://community.webfaction.com/questions/2476/how-can-i-reduce-my-rubyrails-memory-usage-when-running-redmine
It may be helpful for someone else.

Delayed Jobs leaking memory?

I'm using collectiveidea's delayed_job with my Ruby on Rails app (v2.3.8), and running about 40 background jobs with it on an 8 GB RAM Slicehost machine (Ubuntu 10.04 LTS, Apache 2).
Let's say I ssh into my server with no workers running. When I do free -m, I see I'm generally using about 1 GB of RAM out of 8. Then after starting the workers and waiting about a minute for them to be utilized by the code, I'm up to about 4 GB. If I come back in an hour or two, I'll be at 8 GB and into swap, and my website will be generating 502 errors.
So far I've just been killing the workers and restarting them, but I'd rather fix the root of the problem. Any thoughts? Is this a memory leak? Or, as a friend suggested, do I need to figure out a way to run garbage collection?
Actually, Delayed::Job 3.0 leaks memory in Ruby 1.9.2 if your models have serialized attributes. (I'm in the process of researching a solution.)
Here's someone who seems to have solved it: http://spacevatican.org/2012/1/26/memory-leak-in-yaml-on-ruby-1-9-2
Here's the issue from Delayed::Job: https://github.com/collectiveidea/delayed_job/issues/336
Just about every time someone asks about this, the problem is in their code. Try using one of the available profiling tools to find where your job is leaking. ( https://github.com/wycats/ruby-prof or similar.)
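A minimal sketch of what that looks like with ruby-prof (the profiled section is a placeholder for one iteration of your own job code):

    require 'ruby-prof'

    RubyProf.start
    # ... run one iteration of the suspect job here ...
    result = RubyProf.stop

    # Print a flat, per-method breakdown to stdout.
    RubyProf::FlatPrinter.new(result).print($stdout)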
Triggering GC at the end of each job will reduce your max memory usage at the cost of thrashing your throughput. It won't stop Ruby from bloating to the max size required by any individual job, however, since Ruby can't free memory back to the OS. I don't recommend taking this approach.
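That said, if you want to experiment with per-job GC despite that caveat, delayed_job 3.0's plugin API makes it a few lines. A rough sketch (the plugin class name is mine):

    # e.g. in config/initializers/delayed_job_gc.rb
    class ForceGcPlugin < Delayed::Plugin
      callbacks do |lifecycle|
        # Run a full GC pass after every completed job.
        lifecycle.after(:perform) do |worker, job|
          GC.start
        end
      end
    end

    Delayed::Worker.plugins << ForceGcPlugin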

Reduce Mongrel Rails Memory Footprint & Increase performance?

My Rails sites run Mongrel, and I am having a problem with the amount of memory being used. My ruby-bin processes are using about 66 MB of resident memory each. How can I reduce the amount of memory used by Rails?
It is not very economical to have many Rails servers running on a single machine if they are eating memory at this rate. My PHP5 FastCGI processes sit at between 15 and 25 MB.
I'm fairly unfamiliar with RoR; would using JRuby help? Any comments helpful in reducing memory footprint and increasing performance are more than welcome.
You might look at Phusion Passenger and Ruby Enterprise Edition, which is the de facto standard setup for Rails apps these days. One of its aims is cutting memory use. It's also simpler than having a bunch of Mongrels.
If you are not tied to Apache for something else, I would also try nginx with Phusion Passenger. If you're concerned about memory usage, you should see a smaller footprint from nginx than from Apache, and the latest version of Passenger will download, compile, and install nginx for you with minimal headaches.
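For reference, that install flow is a single bundled command (run as root; version numbers and prompts will vary):

    gem install passenger
    passenger-install-nginx-module   # downloads, compiles and installs nginx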
You can also replace your Mongrel processes with Thin, which is more efficient and has recently been patched in its garbage collection (through EventMachine) to make it even better.
We run a Thin cluster behind nginx frontends with fine results.
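If you want to try the same, here is a sketch of a minimal cluster config using Thin's standard YAML options (the socket path and server count are illustrative):

    # config/thin.yml -- start the cluster with: thin start -C config/thin.yml
    environment: production
    servers: 3
    socket: /tmp/thin.sock
    pid: tmp/pids/thin.pid
    log: log/thin.log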
I wouldn't go so far as to say Passenger is the de facto standard, but it's gaining a lot of traction. We just switched to Nginx+Passenger, and our Ruby app's memory footprint (Mongrels vs. Passenger) dropped from about 450 MB to 295 MB. It can drop lower still, as Passenger will kill off processes if they idle (this is configurable); but of course, if you're getting traffic and it's using all of the instances you have configured, then they'll use up memory accordingly.
Note that we are not using Ruby Enterprise Edition in our config yet (mainly because it's not yet available at Engine Yard), but we are still seeing a smaller memory footprint. Memory was the top initial reason we made the switch, but there are other benefits, such as faster and easier configuration for scaling up or down, and so on.

Can I limit Apache+Passenger memory usage on a server without swap space?

I'm running a Rails application with Apache+Passenger on virtual servers that do not have any swap space configured.
The site gets a decent amount of traffic, 200K+ requests daily, and sometimes the whole system runs out of memory, causing odd behaviour across the system.
Is there any way to configure Apache or Passenger so they do not run out of memory (e.g. by gracefully restarting Passenger instances when they start using, say, more than 300 MB of memory)?
The servers have 4 GB of memory, and currently I'm using Passenger's PassengerMaxRequests option, but it does not seem to be the most solid solution here.
At the moment I also cannot switch to nginx, so that is not an option for freeing up some memory.
Any clever ideas I'm probably missing are welcome.
Edit: My temporary solution
I did not go with restarting Rails instances when they exceed a certain amount of memory usage. Engine Yard wrote a great blog post on the ActiveRecord memory bloat issue, which is our main suspect here. As I did not have much time to optimize the application, I set PassengerMaxRequests to 300 and added an extra 2 GB of memory to the server. Things have been good since then. At first I was worried that continuously restarting Rails instances would make things slow, but it does not seem to have an impact I should worry about.
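In Apache config terms, the temporary fix boils down to one directive (300 is simply the value that worked for me):

    # Recycle each application process after it has served 300 requests,
    # which puts an upper bound on how far any single process can bloat.
    PassengerMaxRequests 300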
If by "limiting" you mean killing those processes, and if this is the only application on the server and it is running Linux, then you have two choices:
Set the maximum amount of memory one process can have, e.g. with ulimit (called with no value, it prints the current limit):

    # ulimit -m
    unlimited

(Note that on many Linux kernels the resident-size limit set by -m is not actually enforced; ulimit -v, which caps virtual memory, is the more reliable knob.)
Or use cgroups for similar behavior:
http://en.wikipedia.org/wiki/Cgroups
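A rough shell sketch of the cgroup approach (cgroups v1 memory controller; the mount point and PID are illustrative and vary by distro):

    # Create a group with a 300 MB hard cap and move the target process into it.
    mkdir /sys/fs/cgroup/memory/apache
    echo $((300 * 1024 * 1024)) > /sys/fs/cgroup/memory/apache/memory.limit_in_bytes
    echo 1234 > /sys/fs/cgroup/memory/apache/tasks   # PID of the process to confine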
I would advise against restarting instances (if that is possible) that go over the "memory limit", because that may put your system in infinite loops where a process repeatedly reaches that limit and restarts.
Maybe you could write a simple daemon that constantly watches the processes, and kills any that go over a certain amount of memory. Be sure to log any information about the process that did this so you can fix the issue whenever it comes up.
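Something in this direction, as a rough sketch (the 300 MB threshold and the process name are assumptions you would adapt):

    #!/bin/sh
    # Every minute, kill any ruby process whose resident size exceeds ~300 MB.
    while sleep 60; do
      ps -C ruby -o pid=,rss= | while read pid rss; do
        if [ "$rss" -gt 307200 ]; then   # ps reports rss in KB
          logger "memory watchdog: killing PID $pid (rss ${rss} KB)"
          kill "$pid"
        fi
      done
    done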
I'd also look into getting some real swap space on there... This seems like a bad hack.
I have a problem where Passenger processes end up going out of control and consuming too much memory. I wrote the following script, which has been helping to keep things under control until I find the real solution: http://www.codeotaku.com/blog/2012-06/passenger-memory-check/index. It might be helpful.
Passenger web instances don't contain important state (generally speaking), so killing them isn't normally a problem, and Passenger will restart them as and when required.
I don't have a canned solution, but you might want to use two commands that ship with Passenger to keep track of memory usage and the number of processes: passenger-status and sudo passenger-memory-stats; see
Passenger users guide for Nginx or
Passenger users guide for Apache.
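Both are plain command-line tools, e.g.:

    passenger-status              # pool size, active processes, request queue
    sudo passenger-memory-stats   # per-process memory for Apache + Passenger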

How can I find out why my app is slow?

I have a simple Rails app deployed on a 500 MB Slicehost VPS. I'm the only one who uses the app. When I run it on my laptop, it's fast enough. But the deployed version is insanely slow. It takes 6 to 10 seconds to load the login screen.
I would like to find out why it's so slow. Is it my code? (Don't think so because it's much faster locally, but maybe.) Is it Slicehost's server being overloaded? Is it the Internet?
Can someone suggest a technique or set of steps I can take to help narrow down the cause of this problem?
Update:
Sorry, I forgot to mention: I'm running it under CentOS 5 using Phusion Passenger (AKA mod_rails or mod_rack).
If it is just slow the first time you load it, that is probably because Passenger killed the process due to inactivity. I don't remember all the details, but I do recall reading about people who used cron jobs to keep at least one process alive to avoid the lag that occurs when Passenger needs to reload the environment.
Edit: more details here
Specifically, the pool idle time defaults to 2 minutes, which means that after two minutes of idling, Passenger has to reload the environment to serve the next request.
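If you would rather change the timeout than work around it with cron, the relevant directive is below (0 disables idle shutdown entirely; the value is in seconds):

    # In httpd.conf / the vhost:
    PassengerPoolIdleTime 0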
First, find out if there's a particularly slow response from the server. Use Firefox and the Firebug plugin to see how long each component (including JavaScript and graphics) takes to download. Assuming the main page itself is what is taking all the time, you can start profiling the application. You'll need to find a good profiler, and as I don't actually work in Ruby on Rails, I can't suggest any: google "profile ruby on rails" for some options.
As YenTheFirst points out, the server software and config you're using may contribute to a slowdown, but A) Slicehost doesn't choose that, you do, as Slicehost just provides very raw server "slices" that you can treat as dedicated machines, and B) you're unlikely to see a script that runs instantly suddenly take 6 seconds just because it's running as CGI. Something else must be going on. Check how much RAM you're using: have you gone into swap? Is the login slow only the first time it's hit, indicating some startup issue, or is it always that slow? Is static content served slowly? Slow static content would tend to mean some network issue (either on the Slicehost side, or your local network) is slowing things down, assuming you're not in swap.
When you say "fast enough" you're being vague: does the laptop version take 1 second to Slicehost's 6? That wouldn't be entirely surprising if the laptop is decent: after all, the reason slices are cheap is that they're a fraction of a full server. You're probably using 1/32 of an 8-core machine at Slicehost, as opposed to both cores of a modern laptop. The Slicehost cores are quick, but your laptop could be a screamer compared to 1/4 of a core. :)
Try to pinpoint where the slowness lies:
1/ Is the application slow, or the infrastructure (network + web server)?
Put a static file on your web server and access it through your browser.
2/ If that is fast, it is probably a problem with the application + server configuration:
is database access slow?
try a page with a simple loop (see the sketch after this list): is it slow?
3/ If it is slow, it is probably your infrastructure. You can check:
bad network connection: do a packet capture (with Wireshark, for example) and look for retransmissions, duplicate packets, etc.
is DNS resolution slow?
is the server misconfigured?
etc.
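For step 2, a throwaway action along these lines serves as the "simple loop" page (a hypothetical controller, shown in Rails 2.x style):

    class DiagnosticsController < ApplicationController
      # Pure CPU work: no database access, no view rendering.
      def loop_test
        total = 0
        1_000_000.times { |i| total += i }
        render :text => "done: #{total}"
      end
    end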
What is Slicehost using to serve it?
Fast options are things like Mongrel, or Apache's mod_rails (also called Phusion Passenger, or something like that).
These are dedicated servers (or plugins to servers) which run an instance of your Rails app.
If your host isn't using that, then it's probably defaulting to CGI. Rails comes with a simple CGI script that will serve the page, but it reloads the app for every page.
(edit: I suspect that this is the most likely case, that your app is running off of the CGI in /webapp_directory/public/dispatch.cgi, which would explain the slowness. This tends to be a default deployment on many hosts, since it doesn't require extra configuration on their part, but it doesn't give good performance)
If your host supports FastCGI, Rails supports that too. FastCGI will open a CGI session and keep it open across multiple pages, so you get much better performance, but it's still not nearly as good as Mongrel or mod_rails.
Secondly, is it in 'production' or 'development' mode? The easy way to tell is to go to a page in your app that gives an error. If it shows you a stack trace, it's in development mode, which is slower than production mode. Mongrel and mod_rails have startup options to determine whether to run the app in production or development mode.
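For example (the Mongrel flag and the Passenger directive are both standard):

    # Mongrel: pass the environment at startup
    mongrel_rails start -e production

    # Passenger / mod_rails: set it in the Apache vhost
    RailsEnv production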
Finally, if your database is slow for whatever reason, that will be a big bottleneck as well. If you do have a good deployment (Mongrel/mod_rails/etc.) in production mode, try looking into that.
Do you have a lot of data in your DB? I would double-check that you have indexed all the appropriate columns, because this can make a huge difference. On your local dev system you probably have a lot more memory than on your 500 MB slice, which would make the DB run a lot slower if you have big, unindexed tables. You can also turn on the slow query log in MySQL to pinpoint queries on columns without indexes.
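To turn the slow query log on, something like this in my.cnf works (MySQL 5.0-era option names; the log path is an example):

    [mysqld]
    log_slow_queries              = /var/log/mysql/mysql-slow.log
    long_query_time               = 1
    log-queries-not-using-indexes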
Other than that, yes, Passenger will need to spool up a process for you if you have not used the site recently. If this is the case, you should see a significant speed increase on the second, and especially third and later, page loads.
You might want to run a local virtual machine with 500 MB. Are you doing a lot of client-server interaction? Delays over the WAN are significant.
You might want to check out RPM (there's a free "lite" version too) and/or New Relic's Tune Up.
Your CPU time is guaranteed by Slicehost using the Xen virtualization system, so it's not that. Don't have the other answers for you, sorry! Might try 'top' on a console while you're trying to access the page.
If you are using Firefox and doing localhost testing (or maybe even on a LAN), you may want to try editing the network.dns.disableIPv6 setting.
Type about:config in the address bar, filter for network.dns.disableIPv6, and double-click the entry to set it to true.
This bug has been reported mainly on Vista, but on some other OSes as well.
You could try running 'top' when you SSH in to see which process is heavy. If you also have problems logging in, perhaps try the statistics in the Slicehost manager.
If you discover it is MySQL's fault, consider decreasing the number of threads it can spawn.
512 MB seems decent for a Rails application; you might also want to check whether something is misconfigured.
