Why are Ruby processes at 100% CPU on Passenger - ruby-on-rails

I have a Rails app (2.3.5) running on a VPS with 4 cores @ 2 GHz and 4 GB of memory. I am running nginx (0.7.61) and Phusion Passenger (2.2.14) on Ruby Enterprise Edition (1.8.7-2010.01) with the max pool size set to 30. My problem is that every Ruby process that is executing a Rails request seems to run at near 100% CPU. If I run top, they drop off every time the display refreshes, so they are not hung, but they are still running at 100%.
Is there any way I can bring this down? Or at least figure out what portion of code is spiking the CPU? Is this normal behavior?
Here is the top output:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2427 psadmin 25 0 91904 76m 2696 R 100 1.9 739:05.96 Rails: /var/www/apps/main_rails_app/current
3457 psadmin 25 0 98180 82m 2532 R 100 2.0 711:21.91 Rails: /var/www/apps/main_rails_app/current
2415 psadmin 25 0 93952 77m 2708 R 99 1.9 727:49.31 Rails: /var/www/apps/main_rails_app/current
3455 psadmin 25 0 99204 83m 2528 R 69 2.0 726:04.70 Rails: /var/www/apps/main_rails_app/current
2791 psadmin 16 0 98044 81m 2492 S 31 2.0 0:10.16 Rails: /var/www/apps/main_rails_app/current
8034 psadmin 15 0 8160 3656 1772 S 1 0.1 0:35.39 nginx: worker process
8035 psadmin 15 0 8324 3696 1732 S 0 0.1 0:31.34 nginx: worker process
2588 psadmin 15 0 197m 183m 2712 S 0 4.5 1:02.16 Rails: /var/www/apps/main_rails_app/current
Thanks!
Edit: I tried strace with follow-forks as suggested in the answer below. This is the output that is dumped over and over:
sudo strace -f -p 3455
clock_gettime(CLOCK_MONOTONIC, {394577, 508326476}) = 0
select(0, [], [], [], {0, 0}) = 0 (Timeout)
--- SIGVTALRM (Virtual timer expired) @ 0 (0) ---
sigreturn()

Check your logs for suspicious behavior. In general Rails does suck up a bunch of CPU, though. You could also try pointing strace at the offending PIDs.
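For example, a minimal sketch only (the PID is taken from the top listing in the question): attach with strace's -c flag, let it run for a while, then press Ctrl-C to get a per-syscall summary instead of the raw stream shown in the edit above.
sudo strace -c -f -p 2427
The summary table of syscall counts and time often makes it obvious whether the process is spinning in timers and selects or doing real work.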

Related

lltd process taking a lot of memory and CPU time

I am running a Rails app on nginx + Unicorn on an 8 GB AWS server, and there is one process called lltd. I have no clue what this process is. I am not sure whether I should kill it or not, as it may be a system process.
This is my top output:
22824 ubuntu 20 0 4 S 138.2 4.4 241156:32 lltd
23283 ubuntu 20 0 4104 R 48.5 4.2 1:26.94 ruby
31631 mysql 20 0 0 S 8.6 4.1 264:54.60 mysqld
23293 ubuntu 20 0 4288 S 1.3 6.5 2:40.30 ruby
36 root 20 0 0 S 0.7 0.0 103:33.00 kswapd0
Can anyone help me with this?
I tried googling this but there are no results on it.

Rails app Sidekiq high memory usage

I have a Rails 4 app deployed on Digital Ocean using Ubuntu 14.04.5 LTS. The app seems to run fine but the system runs at 95% memory all the time. I even upgraded the droplet to double the RAM and it's still at 95%.
Here is my top output:
top - 11:03:54 up 8:37, 1 user, load average: 0.00, 0.03, 0.05
Tasks: 118 total, 1 running, 117 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.2 us, 0.2 sy, 0.0 ni, 99.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem: 2049956 total, 1980616 used, 69340 free, 8708 buffers
KiB Swap: 1048572 total, 1036928 used, 11644 free. 47864 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
7510 ubuntu 20 0 780436 274428 2540 S 0.0 13.4 0:27.66 ruby
1313 root 20 0 1921472 250948 2700 S 0.0 12.2 1:42.70 bundle
1315 root 20 0 1876992 246204 2664 S 0.0 12.0 1:44.10 bundle
1359 root 20 0 1928636 236168 2692 S 0.0 11.5 1:42.58 bundle
6408 ubuntu 20 0 781764 175368 2244 S 0.3 8.6 1:10.81 ruby
8681 ubuntu 20 0 984140 156708 1884 S 0.3 7.6 1:37.95 ruby
8810 ubuntu 20 0 646824 117356 2548 S 0.0 5.7 0:11.07 ruby
8821 ubuntu 20 0 646920 112728 2532 S 0.0 5.5 0:11.48 ruby
8797 ubuntu 20 0 646728 82372 2960 S 0.0 4.0 0:14.33 ruby
1932 ubuntu 20 0 332292 56948 1552 S 0.0 2.8 0:04.88 ruby
I know there are tons of blog posts etc. on Rails app memory optimization. The 3 bundle processes are what confuse me. My app (actually 2 apps - one production and one staging) uses Redis / Sidekiq, which are the bundle processes. So my questions are:
1) Is this 'normal'?
2) If not, is there a way to start troubleshooting this?
UPDATE
Here is the ps output, sorted by memory:
ubuntu@rails-01:~$ ps aux --sort=-%mem
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1291 8.8 9.4 846856 194624 ? Sl 18:04 0:23 sidekiq 4.2.10 production [0 of 3 busy]
root 1411 8.9 9.4 846824 194532 ? Sl 18:04 0:24 sidekiq 4.2.10 production [0 of 3 busy]
root 1272 9.1 9.4 712752 193516 ? Sl 18:04 0:24 sidekiq 4.2.10 staging [0 of 1 busy]
ubuntu 2254 0.6 9.3 645792 192648 ? Sl 18:05 0:01 Passenger RubyApp: /home/ubuntu/production/current/public (production)
ubuntu 1986 0.6 9.3 645048 192064 ? Sl 18:05 0:01 Passenger RubyApp: /home/ubuntu/staging/current/public (staging)
ubuntu 1762 9.8 9.2 375520 190264 ? Sl 18:04 0:24 Passenger AppPreloader: /home/ubuntu/production/current
ubuntu 1678 9.5 9.2 374872 189588 ? Sl 18:04 0:25 Passenger AppPreloader: /home/ubuntu/staging/current
ubuntu 2082 0.2 9.1 645144 187524 ? Sl 18:05 0:00 Passenger RubyApp: /home/ubuntu/staging/current/public (staging)
ubuntu 1839 2.9 3.9 197300 79976 ? Sl 18:04 0:06 Passenger AppPreloader: /home/ubuntu/landing/current
ubuntu 1962 0.1 3.8 332292 78720 ? Sl 18:05 0:00 Passenger RubyApp: /home/ubuntu/landing/current/public (production)
ubuntu 1969 0.0 3.7 332388 76044 ? Sl 18:05 0:00 Passenger RubyApp: /home/ubuntu/landing/current/public (production)
I forgot I have 2 production workers on the server and 1 staging worker. I had concurrency at 5 and 2, but I then lowered that to 3 and 1. All Sidekiq is doing is some low-level upload image processing and bulk record creations, updates, and deletes that I don't want the user to sit around waiting on during a page load.
I am now seeing it level off at 80%. Better, but it still seems high. I think the next step will be lots of code optimization, etc. I am sure there are lots of things I can find there.
There are tons of reasons why your Ruby process might be eating too much memory. Any gem or app code can allocate any amount of memory, so in general it is impossible for SO to tell you why. Here's one possible reason:
https://github.com/rails/rails/issues/27002#issuecomment-260086170
Keep an eye on how many Sidekiq processes you have running, the configured concurrency, the polling interval, and the number of queues you use. A high value for any of those can cause high memory usage. You can tweak those values in your sidekiq.yml and test-drive how they affect your environment.
For more info: https://github.com/mperham/sidekiq/wiki/Advanced-Options
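As a rough sketch only (the exact key syntax varies between Sidekiq versions, and the queue names here are made up), a sidekiq.yml that lowers concurrency and trims the queue list could look like this:
:concurrency: 3
:queues:
  - default
  - uploads
You can also override concurrency per process on the command line with bundle exec sidekiq -c 3.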

Find memory leaks in a Rails application

I have a web application in Ruby on Rails. We use Mongrel clusters started behind Apache httpd to run the application. We have been facing an issue of huge memory consumption in the application. (RedHat, Ruby 1.8.7, Rails 2.3.5, 8 GB RAM)
The thing is, after we start the web server (start the Mongrel clusters), the memory usage keeps increasing. For example, if the free memory (RAM) when I started the web server was 6 GB, after 2 days the free memory drops to 3 GB, even when there is no traffic on the site. If the web server is not restarted for a week, memory usage seems to grow until it uses the full 8 GB of RAM and causes "no memory to allocate" issues for processes like PDF generation using PrinceXML and mail sending using sendmail (I think these are memory-intensive).
Is this a case of a memory leak in the Rails application? How do I check the application for memory leaks? I found a tool for checking memory leaks, bleak_house, but when I install it as a gem as shown in this link, it gives "No command bleak found" when I run 'bleak /tmp/bleak.5979.000.dump' to analyze the dump.
I am using PrinceXML to generate PDF reports and sendmail for mail sending. This server also has an instance of JasperServer running. Can anyone please help?
Here is the result of the top command at the time of memory overload.
-bash-3.2$ top
top - 10:34:10 up 14 days, 7:40, 2 users, load average: 0.24, 0.40, 0.39
Tasks: 181 total, 1 running, 177 sleeping, 2 stopped, 1 zombie
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 8173984k total, 8011564k used, 162420k free, 10044k buffers
Swap: 2096472k total, 152624k used, 1943848k free, 2012016k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
858 **nt*rsc 15 0 12748 1168 832 R 173.5 0.0 0:00.36 top
1 root 15 0 10356 108 76 S 0.0 0.0 0:17.10 init
2 root RT -5 0 0 0 S 0.0 0.0 0:00.10 migration/0
3 root 34 19 0 0 0 S 0.0 0.0 0:00.09 ksoftirqd/0
4 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/0
5 root RT -5 0 0 0 S 0.0 0.0 0:00.12 migration/1
6 root 34 19 0 0 0 S 0.0 0.0 0:00.12 ksoftirqd/1
7 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/1
8 root RT -5 0 0 0 S 0.0 0.0 0:00.70 migration/2
9 root 34 19 0 0 0 S 0.0 0.0 0:00.07 ksoftirqd/2
10 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/2
11 root RT -5 0 0 0 S 0.0 0.0 0:00.67 migration/3
12 root 34 19 0 0 0 S 0.0 0.0 0:00.11 ksoftirqd/3
13 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/3
14 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/0
15 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/1
16 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/2
I'd try using Passenger (which automatically restarts and manages Rails instances that grow too large in memory - much easier than rebooting Mongrels that have strayed beyond sane memory constraints). You might also have luck with the Ruby Enterprise Edition fork of 1.8.7, which backports some memory management fixes from 1.9 (like allowing the VM to shrink when it's using less memory) - that change might have worked its way back into plain 1.8.7, although I am not sure. The claim for REE is that you can reduce memory consumption by about 33% for Rails applications.
Ruby processes generally tend to grow over time and need restarting; with Passenger that happens automatically. It has worked perfectly for me, so I can really recommend it.
http://www.modrails.com/
It also has good memory analysis tools:
http://www.modrails.com/documentation/Users%20guide%20Apache.html#_analysis_and_system_maintenance
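For example (assuming Passenger's admin tools are installed and on your PATH - command names as in the Passenger docs of that era), you can get a quick per-process view with:
sudo passenger-status
sudo passenger-memory-stats
passenger-memory-stats shows the private/real memory of each Apache and Rails process, which is a good starting point for deciding whether and where to set limits.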

Memory used but I can't see the process that uses it (Debian)

Here is my problem:
top - 11:32:47 up 22:20, 2 users, load average: 0.03, 0.72, 1.27
Tasks: 112 total, 1 running, 110 sleeping, 1 stopped, 0 zombie
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 8193844k total, 7508292k used, 685552k free, 80636k buffers
Swap: 2102456k total, 15472k used, 2086984k free, 7070220k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28555 root 20 0 57424 38m 1492 S 0 0.5 0:06.38 bash
28900 root 20 0 39488 7732 3176 T 0 0.1 0:03.12 python
28553 root 20 0 72132 5052 2600 S 0 0.1 0:00.22 sshd
28859 root 20 0 70588 3424 2584 S 0 0.0 0:00.06 sshd
29404 root 20 0 70448 3320 2600 S 0 0.0 0:00.06 sshd
28863 root 20 0 42624 2188 1472 S 0 0.0 0:00.02 sftp-server
29406 root 20 0 19176 1984 1424 S 0 0.0 0:00.00 bash
2854 root 20 0 115m 1760 488 S 0 0.0 5:37.02 rsyslogd
29410 root 20 0 19064 1400 1016 R 0 0.0 0:05.14 top
3111 ntp 20 0 22484 604 460 S 0 0.0 10:26.79 ntpd
3134 proftpd 20 0 64344 452 280 S 0 0.0 6:29.16 proftpd
2892 root 20 0 49168 356 232 S 0 0.0 0:31.58 sshd
1 root 20 0 27388 284 132 S 0 0.0 0:01.38 init
3121 root 20 0 4308 248 172 S 0 0.0 0:16.48 mdadm
As you can see, 7.5 GB of memory is used, but there is no process that uses it.
How can that be, and how do I fix it?
Thanks for any answers.
www.linuxatemyram.com
It's too good of a site to ruin by copy/pasting the entire contents here.
In order to see all processes, you can use this command:
ps aux
and then try sorting the output in different ways, for example as a process tree:
ps faux
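If you specifically want the biggest memory users first, sorting by resident size also works with procps ps (the 15-line cut-off here is just for illustration):
ps aux --sort=-rss | head -n 15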
Hope that helps.
If your system starts using the swap file, then you have high memory load. Depending on the file system and the programs you use, a Linux system may allocate all of your system memory (largely for caches and buffers) - but that doesn't mean the programs are actually using it.
Lots of the Ubuntu and Debian servers that we use show 32 or 64 MB of free memory but don't touch swap.
I'm not a Linux guru, however, so please someone correct me if I'm wrong :)
I don't have a Linux box handy to experiment with, but it looks like you can sort top's output with interactive commands, so you could bring the biggest memory users to the top. Check the man page and experiment.
Update: In the version of top I have (procps 3.2.7), you can hit "<" and ">" to change the field it sorts by. It doesn't actually say which field that is; you have to watch how the display changes. It's not hard once you experiment a little.
However, Arrowmaster's point (that the memory is probably being used for cache) is a better answer. Use "free" to see how much is actually being used.
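A minimal example of that: on Debian-era procps, the '-/+ buffers/cache' row of free is the number that matters, because it excludes memory the kernel is only holding for caching.
free -m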
I had a similar problem. I was running Raspbian on a Pi B+ with a TP-Link USB wireless LAN stick connected. The stick caused a problem which resulted in nearly all memory being consumed at system start (around 430 of 445 MB). Just like in your case, the running processes did not account for that much memory. When I removed the stick and rebooted, everything was fine - just 50 MB of memory consumption.

Swapping problem for Rails app on Slicehost

I have a Rails 2.3.8 app hosted and running on Slicehost (256 MB). I am not familiar at all with the back end; I basically followed the steps from the Slicehost tutorials to install Apache. The memory usage being very high, I then changed my Apache conf file to reduce the MaxClients number to 10... but my slice is still swapping.
Here is the memory usage I get after just a few clicks on my site:
top - 23:57:12 up 28 min, 2 users, load average: 0.43, 0.54, 0.30
Tasks: 79 total, 1 running, 78 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.1%us, 0.1%sy, 0.0%ni, 97.8%id, 0.1%wa, 0.0%hi, 0.0%si, 2.0%st
Mem: 262364k total, 258656k used, 3708k free, 260k buffers
Swap: 524280k total, 262772k used, 261508k free, 6328k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4004 web-app 20 0 178m 72m 1888 S 0 28.4 0:04.38 ruby1.8
4001 web-app 20 0 172m 61m 1932 S 0 24.2 0:02.72 ruby1.8
3941 root 20 0 164m 57m 1672 S 0 22.5 0:21.44 ruby
3990 web-app 20 0 209m 21m 1696 S 0 8.4 0:18.00 ruby1.8
3950 web-app 20 0 165m 7464 1548 S 0 2.8 0:20.40 ruby1.8
3684 mysql 20 0 224m 6504 2084 S 0 2.5 0:14.34 mysqld
3938 root 20 0 53632 3048 1036 S 1 1.2 0:01.50 starling
3839 root 20 0 243m 1456 1248 S 0 0.6 0:00.34 apache2
3897 www-data 20 0 243m 1452 1072 S 0 0.6 0:00.04 apache2
3894 www-data 20 0 243m 1368 1008 S 0 0.5 0:00.04 apache2
3895 www-data 20 0 243m 1220 960 S 0 0.5 0:00.02 apache2
3888 root 20 0 46520 1204 1100 S 0 0.5 0:02.29 ruby1.8
3866 root 20 0 17648 1184 896 S 0 0.5 0:00.08 bash
3896 www-data 20 0 243m 1180 952 S 0 0.4 0:00.00 apache2
3964 www-data 20 0 243m 1164 956 S 0 0.4 0:00.02 apache2
3892 www-data 20 0 243m 1132 956 S 0 0.4 0:00.00 apache2
3948 www-data 20 0 243m 1132 956 S 0 0.4 0:00.00 apache2
3962 www-data 20 0 243m 1132 956 S 0 0.4 0:00.02 apache2
3963 www-data 20 0 243m 1132 956 S 0 0.4 0:00.00 apache2
3965 www-data 20 0 243m 1080 888 S 0 0.4 0:00.00 apache2
3887 root 20 0 89008 960 796 S 0 0.4 0:00.00 ApplicationPool
I'm not sure what to do next... I could upgrade to a larger slice, but for now I have almost no traffic on this app, so I think it's more of a problem with my configuration or maybe my code?
Any concrete recommendations would be welcome!
Thanks
It looks like your Rails app is using all your available memory. I would recommend three things:
1. Upgrade the memory on your server. 256 MB is not very much for a Rails app. Going to 512 MB may alleviate your problem. If that solves it, you then need to weigh the additional cost ($18/mo) against how much time it would take to track down the performance issues.
2. Profile your application to figure out which requests are consuming the most memory. These are likely to be places where you're finding a lot of records and possibly including some associated tables too. There are a couple of tools out there to help you narrow down possible trouble areas. I've used oink, but there are definitely others. Once you figure out where the problems are, you can make some tweaks to try and reduce the memory usage.
3. Assuming you're using Passenger with Apache, you can reduce the number of concurrent requests in the Passenger config file. This might be useful for that: https://serverfault.com/questions/15350/running-ruby-on-rails-app-on-apache-passenger-to-much-memory
In short, 256 MB is tight for a Rails application. You did not really give any specifics on how you are running Rails, but I assume you are using Apache with the Passenger module. The Passenger module can be configured for how many application instances it keeps running. You have 4 Ruby instances running under the web-app account; I guess those come from Passenger. In the configuration, you can limit how many instances Passenger starts, which will reduce the memory requirements.
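For example (a sketch only - directive names are from the Passenger-for-Apache documentation, and availability and sensible values depend on your Passenger version), you could cap the pool and recycle processes in your Apache config:
PassengerMaxPoolSize 2
PassengerPoolIdleTime 300
PassengerMaxRequests 500
PassengerMaxPoolSize limits how many application processes run at once, and PassengerMaxRequests restarts a process after it has served that many requests, which keeps slow memory growth in check.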
On the other hand, when working with only 256 MB and hosting only one Rails application, it might be better to go for another setup. The setup I used myself before was an Nginx web server and a Mongrel cluster with 2 Mongrels (on 192 MB, and the application was only for testing purposes). Basically, that means that at any one time you can process 2 (and only 2) Rails requests in parallel. That setup is maybe a bit harder than Apache + Passenger, but definitely not difficult. I think it is a more performant solution if you stick with 256 MB.
