Unicorn cannot allocate memory - ruby-on-rails

I need your help!
I have deployed my Rails app on Ubuntu 12.04, using Nginx, MySQL, Solr and Unicorn.
Every service mentioned above starts except Unicorn, which reports the following:
I, [2013-02-11T16:10:20.187989 #27547] INFO -- : Refreshing Gem list
I, [2013-02-11T16:10:52.159198 #27547] INFO -- : unlinking existing socket=/var/www/staging/shared/unicorn.sock
I, [2013-02-11T16:10:52.159488 #27547] INFO -- : listening on addr=/var/www/staging/shared/unicorn.sock fd=12
E, [2013-02-11T16:10:52.161513 #27547] ERROR -- : Cannot allocate memory - fork(2) (Errno::ENOMEM)
/var/www/staging/shared/gems/ruby/1.9.1/gems/unicorn-4.5.0/lib/unicorn/http_server.rb:496:in `fork'
/var/www/staging/shared/gems/ruby/1.9.1/gems/unicorn-4.5.0/lib/unicorn/http_server.rb:496:in `spawn_missing_workers'
/var/www/staging/shared/gems/ruby/1.9.1/gems/unicorn-4.5.0/lib/unicorn/http_server.rb:142:in `start'
/var/www/staging/shared/gems/ruby/1.9.1/gems/unicorn-4.5.0/bin/unicorn_rails:209:in `'
/var/www/staging/shared/gems/ruby/1.9.1/bin/unicorn_rails:23:in `load'
/var/www/staging/shared/gems/ruby/1.9.1/bin/unicorn_rails:23:in `'
The VDS has 1.5 GB of RAM, which should be enough for Unicorn:
cat /proc/meminfo
MemTotal: 1585152 kB
MemFree: 989580 kB
Cached: 425296 kB
Active: 348504 kB
Inactive: 175356 kB
Active(anon): 98488 kB
Inactive(anon): 76 kB
Active(file): 250016 kB
Inactive(file): 175280 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 204800 kB
SwapFree: 204800 kB
Dirty: 12 kB
Writeback: 0 kB
AnonPages: 98564 kB
Shmem: 3604 kB
Slab: 71680 kB
SReclaimable: 66144 kB
SUnreclaim: 5536 kB
I am using unicorn_rails v4.5.0.
Unicorn is started with the following command:
bundle exec unicorn_rails -c /var/www/staging/current/config/unicorn.rb -E production -D
What am I doing wrong here?
Hmm, I remember that I previously had the following strange error:
failed: "rvm_path=/usr/local/rvm /usr/local/rvm/bin/rvm-shell 'ruby-1.9.3-p327' -c 'cd /var/www/staging/current && bundle exec unicorn_rails -c /var/www/staging/current/config/unicorn.rb -E production -D'"
Maybe it is somehow related to memory problems…

It seems the error happens when forking new worker processes. You may need to reduce the number of workers in your config/unicorn.rb file. Each worker is a separate process, and each process loads the full application environment into RAM.
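For example, a minimal config/unicorn.rb along these lines caps the memory footprint (this is an illustrative sketch, not the poster's actual config; the worker count and timeout are assumptions, while the paths come from the log above):

# config/unicorn.rb (illustrative sketch)
worker_processes 2                                  # fewer workers = fewer copies of the app in RAM
working_directory "/var/www/staging/current"
listen "/var/www/staging/shared/unicorn.sock"
timeout 30
preload_app true                                    # load the app in the master once, then fork workers

With 1.5 GB of RAM and only about 200 MB of swap (per the meminfo above), even a few workers of a large Rails app can exhaust memory at fork time, so start low and raise the count only while watching free -m.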

Related

Ruby on Rails app stuck on futex; 'kill -QUIT PID' and Passenger give no trace

Part of the passenger-status (nginx/1.14.0 Phusion_Passenger/6.0.1) output shows that two processes are shutting down but cannot quit.
* PID: 10351 Sessions: 1 Processed: 279777 Uptime: 6d 1h 41m 37s
CPU: 3% Memory : 625M Last used: 2d 3
Shutting down...
* PID: 10370 Sessions: 1 Processed: 290718 Uptime: 6d 1h 41m 37s
CPU: 3% Memory : 778M Last used: 6h 5
Shutting down...
The strace output tells me the Ruby process is stuck on a futex call.
$ strace -p 10351
strace: Process 10351 attached
futex(0x7fd7cbf02210, FUTEX_WAIT_PRIVATE, 0, NULL
kill -QUIT 10351 also gives me no trace info.
The relevant output of ps -efL for process 10351 shows another thread with ID 10353.
ubuntu 10351 1 10351 0 2 Feb02 ? 00:00:02 Passenger AppPreloader: /var/www/app/current (forking...)
ubuntu 10351 1 10353 0 2 Feb02 ? 00:00:00 Passenger AppPreloader: /var/www/app/current (forking...)
and strace -p 10353 outputs:
strace: Process 10353 attached
restart_syscall(<... resuming interrupted poll ...>
Any idea how to get the Ruby trace info to debug the issue?
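For reference (this is not from the original thread): with MRI, one commonly used way to get a Ruby-level backtrace out of a stuck process is to attach gdb and call rb_backtrace(); the output goes to the process's stderr, so check the application or Passenger log afterwards. This can hang or kill a badly wedged process, so only try it on a process you are prepared to lose.

# attach to the stuck Ruby process (needs gdb, usually run as root)
sudo gdb -p 10351
# inside gdb: ask the interpreter to print its Ruby backtrace, then detach
(gdb) call rb_backtrace()
(gdb) detach
(gdb) quit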

Phusion Passenger process stuck on (forking...) Rails

Today I updated to the newest packages for Nginx and Passenger. After the update, my app now has a (forking...) process that wasn't there before and doesn't seem to go away. It is taking up memory, and sudo /usr/sbin/passenger-memory-stats reports the following.
--------- Nginx processes ----------
PID PPID VMSize Private Name
------------------------------------
1338 1 186.0 MB 0.8 MB nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
1345 1338 186.3 MB 1.1 MB nginx: worker process
### Processes: 2
### Total private dirty RSS: 1.91 MB
---- Passenger processes -----
PID VMSize Private Name
------------------------------
1312 378.8 MB 2.1 MB Passenger watchdog
1320 663.8 MB 4.2 MB Passenger core
1768 211.5 MB 29.0 MB Passenger AppPreloader: /home/ubuntu/my-app
1987 344.1 MB 52.2 MB Passenger AppPreloader: /home/ubuntu/my-app (forking...)
2008 344.2 MB 41.1 MB Passenger AppPreloader: /home/ubuntu/my-app (forking...)
### Processes: 5
### Total private dirty RSS: 128.62 MB
I have passenger_max_pool_size set to 2. sudo /usr/sbin/passenger-status reports that two processes are currently open. The server is receiving no hits at the moment besides me using the site.
Version : 5.3.0
Date : 2018-05-14 00:41:05 +0000
Instance: ql2TTnkw (nginx/1.14.0 Phusion_Passenger/5.3.0)
----------- General information -----------
Max pool size : 2
App groups : 1
Processes : 2
Requests in top-level queue : 0
----------- Application groups -----------
/home/ubuntu/my-app (production):
App root: /home/ubuntu/my-app
Requests in queue: 0
* PID: 1987 Sessions: 0 Processed: 1 Uptime: 3m 36s
CPU: 0% Memory : 52M Last used: 3m 36s ago
* PID: 2008 Sessions: 0 Processed: 1 Uptime: 3m 35s
CPU: 0% Memory : 41M Last used: 3m 35s ago
Passenger never did this before the update; it now always keeps the (forking...) process around, and it seems to have two apps running when it only needs one. I have searched the documentation and know when Passenger uses forking, when it doesn't, and when it kills an app automatically after a certain amount of time. Did they change something in the newest update that I missed in the docs? It seems that 2008 344.2 MB 89.4 MB Passenger AppPreloader: /home/ubuntu/my-app (forking...) always shows up now, and sometimes there are even two of them, whereas before the update the process always appeared without the (forking...).
This is normal for Passenger >= 5.3.
Source: I'm a dev at Phusion who works on Passenger.

Gitlab-cl server and memory usage

We are running gitlab-cl 10.0.1, installed from the repository, on CentOS 6.9.
We have a physical server with 65 GB of RAM.
We had slow performance on the web interface, so looking at the memory we saw that the server is swapping a bit and all the memory is used.
There is no active process using it, and free -m confirms it is cached:
total used free shared buffers cached
Mem: 64412 64179 232 140 1 176
-/+ buffers/cache: 64001 410
Swap: 15999 2679 13320
The strange thing is that all the memory is allocated in DirectMap2M:
cat /proc/meminfo
MemTotal: 65957916 kB
MemFree: 242364 kB
Buffers: 1132 kB
Cached: 193548 kB
SwapCached: 853032 kB
Active: 6302692 kB
Inactive: 1729836 kB
Active(anon): 6276560 kB
Inactive(anon): 1704824 kB
Active(file): 26132 kB
Inactive(file): 25012 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 16383996 kB
SwapFree: 13580524 kB
Dirty: 1576 kB
Writeback: 0 kB
AnonPages: 7595904 kB
Mapped: 162376 kB
Shmem: 144312 kB
Slab: 57184100 kB
SReclaimable: 35132 kB
SUnreclaim: 57148968 kB
KernelStack: 12912 kB
PageTables: 59144 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 49362952 kB
Committed_AS: 18168608 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 395428 kB
VmallocChunk: 34323721400 kB
HardwareCorrupted: 0 kB
AnonHugePages: 3260416 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 7652 kB
DirectMap2M: 67088384 kB
Do you know why this is happening?
Is that normal with GitLab?
I read about a few commands to drop caches from memory:
# sync; echo 1 > /proc/sys/vm/drop_caches
# sync; echo 2 > /proc/sys/vm/drop_caches
# sync; echo 3 > /proc/sys/vm/drop_caches
Are they safe to run on a production machine running GitLab?
Thanks a lot
I can't say exactly why all the memory is used up, but the fact is that it is. At first I expected the standard https://www.linuxatemyram.com/ situation, but I can see that in your case you are actually using 64001 MB of memory.
The commands listed are pointless; all they do is drop disk blocks that are cached in memory, causing an I/O hit the next time the same block is needed.
To find out what's going on, you need to see which process is using up all the memory. There are several ways to get that info:
ps -e -o pid,vsz,comm= | sort -n -k 2
or, to also get the arguments:
ps -e -o pid,vsz,command= | sort -n -k 2|cut -b1-$COLUMNS
you can start "top" and hit uppercase "M" to sort by memory user.

Memory leak issue in Rails and Phusion Passenger

The Rails site was running so slowly that I had to reboot the OS, but only one hour after rebooting Ubuntu the system was incredibly slow again, so I checked the Passenger memory statistics:
------ Passenger processes ------
PID VMSize Private Name
---------------------------------
1076 215.8 MB 0.3 MB PassengerWatchdog
1085 2022.3 MB 4.4 MB PassengerHelperAgent
1089 109.4 MB 6.4 MB Passenger spawn server
1093 159.2 MB 0.8 MB PassengerLoggingAgent
1883 398.1 MB 110.1 MB Rack: /home/guarddog/public_html/guarddog.com/current
1906 1174.6 MB 885.9 MB Rack: /home/guarddog/public_html/guarddog.com/current
3648 370.1 MB 131.9 MB Rack: /home/guarddog/public_html/guarddog.com/current
14830 370.6 MB 83.0 MB Rack: /home/guarddog/public_html/guarddog.com/current
15124 401.2 MB 113.9 MB Rack: /home/guarddog/public_html/guarddog.com/current
15239 413.5 MB 127.7 MB Rack: /home/guarddog/public_html/guarddog.com/current
15277 400.5 MB 113.6 MB Rack: /home/guarddog/public_html/guarddog.com/current
15285 357.1 MB 70.1 MB Rack: /home/guarddog/public_html/guarddog.com/current
### Processes: 12
### Total private dirty RSS: 1648.10 MB
It boggles my mind that one Rack process is using 885.9 MB of private dirty RSS memory one hour after rebooting the OS, when usage was down to about 100 MB total. Now, one hour later, it's at 1648.10 MB. The site is so slow the page won't even load.
I assume it's a memory leak, so I added this line of code throughout the application:
puts "Object count: #{ObjectSpace.count_objects}"
But I do not know how to interpret the data it gives me:
Object count: {:TOTAL=>2379635, :FREE=>318247, :T_OBJECT=>35074, :T_CLASS=>6707, :T_MODULE=>1760, :T_FLOAT=>174, :T_STRING=>1777046, :T_REGEXP=>2821, :T_ARRAY=>75160, :T_HASH=>64227, :T_STRUCT=>774, :T_BIGNUM=>7, :T_FILE=>7, :T_DATA=>55075, :T_MATCH=>10, :T_COMPLEX=>1, :T_RATIONAL=>63, :T_NODE=>37652, :T_ICLASS=>4830}
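For reference (not part of the original question): raw counts like these are easier to interpret as a diff between two snapshots taken around the suspect code path, for example with a small helper along these lines (the method name is illustrative):

# illustrative sketch: report which object types grew across a block of code
def report_object_growth
  GC.start
  before = ObjectSpace.count_objects.dup
  yield                                             # exercise the suspect request or job here
  GC.start
  after = ObjectSpace.count_objects
  growth = after.map { |type, count| [type, count - before.fetch(type, 0)] }
  top = growth.select { |_, delta| delta > 0 }.sort_by { |_, delta| -delta }.first(10)
  top.each { |type, delta| puts format("%-12s +%d", type, delta) }
end

Types that keep growing across repeated calls (often :T_STRING, :T_ARRAY or :T_HASH) point at where to look next.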
Note that I am only running ObjectSpace.count_objects on my local machine, since my Ubuntu server is so slow it cannot even load the page.
Here are some other OS statistics:
$ cat /proc/meminfo
MemTotal: 6113156 kB
MemFree: 3027204 kB
Buffers: 103540 kB
Cached: 261544 kB
SwapCached: 0 kB
Active: 2641168 kB
Inactive: 248316 kB
Active(anon): 2524652 kB
Inactive(anon): 328 kB
Active(file): 116516 kB
Inactive(file): 247988 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 6287356 kB
SwapFree: 6287356 kB
Dirty: 36 kB
Writeback: 0 kB
AnonPages: 2524444 kB
Mapped: 30108 kB
Shmem: 568 kB
Slab: 77268 kB
SReclaimable: 48528 kB
SUnreclaim: 28740 kB
KernelStack: 4648 kB
PageTables: 43044 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 9343932 kB
Committed_AS: 5179468 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 293056 kB
VmallocChunk: 34359442172 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 46848 kB
DirectMap2M: 5195776 kB
df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/roadrunner-root 134821120 22277596 105695012 18% /
udev 3047064 4 3047060 1% /dev
tmpfs 1222632 252 1222380 1% /run
none 5120 0 5120 0% /run/lock
none 3056576 88 3056488 1% /run/shm
none 102400 0 102400 0% /run/user
/dev/sda1 233191 29079 191671 14% /boot
On a side note, if I run "kill -9 1906" to kill that one rack process consuming so much memory, would that help?
First of all, hot-fix the immediate production problem: implement a watchdog (http://dev.mensfeld.pl/2012/08/simple-rubyrails-passenger-memory-consumption-limit-monitoring/), and then hunt for the leak, or bloat (https://blog.engineyard.com/2009/thats-not-a-memory-leak-its-bloat).
The process you showed looks like a regular worker; try killing the offending process under controlled conditions and see what happens, probably nothing bad.
See if you can correlate this with a certain (often long-running) controller action, or with Apache restarts/reloads (I have this problem on a regular basis: 1 in 20 processes goes bonkers after a restart).
Extend the Rails logs so they contain PIDs (https://gist.github.com/krutten/1091611, for example) and make a simple script that dumps memory use into a file every minute or so (make sure you don't fill your disk); a sketch follows below. This will let you know exactly when a process started bloating and then trace in the logs what it was doing before and as this happened.
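A minimal sketch of such a script (the log path and the "Rack:" filter are assumptions; adapt them to your setup):

#!/bin/sh
# append a timestamped per-process RSS snapshot every minute
while true; do
  {
    date "+%Y-%m-%d %H:%M:%S"
    ps -eo pid,rss,etime,command | grep "[R]ack:" | sort -k2 -n -r
  } >> /var/log/passenger_memory.log
  sleep 60
done

Combined with PIDs in the Rails logs, this lets you line up the moment a process started growing with what it was serving at the time.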

Phusion passenger has crossed maximum instances limit

I'm using the following for my Rails app:
ruby 1.9.2p180 (2011-02-18 revision 30909) [x86_64-linux]
Rails 3.0.5
Phusion Passenger version 3.0.5
The app sits on a Linux box with 4 GB of RAM. I recently upgraded my Rails app from 3.0.1 to 3.0.5 for the critical security fix released last week.
I've been noticing a strange thing. I have the following Passenger settings in my /etc/apache2/apache2.conf:
PassengerMaxPoolSize 10
PassengerMaxInstancesPerApp 5
But there are 18 Rack instances spawned by Passenger. It's just one app on the server and nothing else. The app's response times have become slow. I suspect the extra Rack instances (coming out of nowhere) are occupying extra memory.
Here is my free -m output:
total used free shared buffers cached
Mem: 4011 3992 19 0 1 22
-/+ buffers/cache: 3968 43
Swap: 8191 5780 2411
Here are my passenger-status and passenger-memory-stats outputs.
passenger-status:
----------- General information -----------
max = 10
count = 5
active = 1
inactive = 4
Waiting on global queue: 0
----------- Application groups -----------
/home/anand/public_html/railsapp/current:
App root: /home/anand/public_html/railsapp/current
* PID: 6704 Sessions: 0 Processed: 72 Uptime: 9m 58s
* PID: 6696 Sessions: 0 Processed: 99 Uptime: 9m 58s
* PID: 6712 Sessions: 0 Processed: 69 Uptime: 9m 57s
* PID: 6688 Sessions: 0 Processed: 52 Uptime: 9m 58s
* PID: 6677 Sessions: 1 Processed: 83 Uptime: 11m 28s
passenger-memory-stats:
--------- Apache processes ---------
PID PPID VMSize Private Name
------------------------------------
6470 1 95.5 MB 0.3 MB /usr/sbin/apache2 -k start
6471 6470 94.7 MB 0.5 MB /usr/sbin/apache2 -k start
6488 6470 378.4 MB 4.6 MB /usr/sbin/apache2 -k start
6489 6470 378.0 MB 3.8 MB /usr/sbin/apache2 -k start
6774 6470 377.4 MB 3.0 MB /usr/sbin/apache2 -k start
### Processes: 5
### Total private dirty RSS: 12.20 MB
-------- Nginx processes --------
### Processes: 0
### Total private dirty RSS: 0.00 MB
------ Passenger processes ------
PID VMSize Private Name
---------------------------------
6472 87.1 MB 0.2 MB PassengerWatchdog
6475 100.9 MB 3.2 MB PassengerHelperAgent
6477 39.4 MB 4.8 MB Passenger spawn server
6482 70.7 MB 0.6 MB PassengerLoggingAgent
6677 289.1 MB 114.3 MB Rack: /home/anand/public_html/railsapp/current
6684 287.3 MB 17.2 MB Rack: /home/anand/public_html/railsapp/current
6688 295.6 MB 82.4 MB Rack: /home/anand/public_html/railsapp/current
6696 299.2 MB 88.9 MB Rack: /home/anand/public_html/railsapp/current
6704 299.0 MB 87.3 MB Rack: /home/anand/public_html/railsapp/current
6712 312.6 MB 113.3 MB Rack: /home/anand/public_html/railsapp/current
23808 1174.7 MB 190.9 MB Rack: /home/anand/public_html/railsapp/current
26271 1767.0 MB 690.0 MB Rack: /home/anand/public_html/railsapp/current
28888 1584.7 MB 177.8 MB Rack: /home/anand/public_html/railsapp/current
32403 1638.5 MB 230.3 MB Rack: /home/anand/public_html/railsapp/current
32427 1573.6 MB 253.4 MB Rack: /home/anand/public_html/railsapp/current
32443 1576.0 MB 234.7 MB Rack: /home/anand/public_html/railsapp/current
### Processes: 16
### Total private dirty RSS: 2289.34 MB
What is going wrong here? Is Rails 3.0.5 starting up extra Rack apps? Please help.
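For reference (not part of the original question): one quick check is whether the extra Rack processes are children of the current Passenger spawn server or were left behind by an earlier Apache restart; their parent PIDs and start times will tell you, for example:

# show each Rack process with its parent PID, start time and RSS
ps -eo pid,ppid,lstart,rss,command | grep "[R]ack:"

In the passenger-memory-stats output above, the Rack processes with PIDs in the 6600-6800 range match what passenger-status reports, while the much larger ones (23808, 26271, and so on) do not appear in passenger-status at all, which often points to workers left over from a previous Apache/Passenger restart.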
