Phusion Passenger process stuck on (forking...) Rails - ruby-on-rails

Today I updated to the newest packages for Nginx and Passenger. After the update, my app has a (forking...) process that wasn't there before and doesn't seem to go away, yet it is taking up memory. sudo /usr/sbin/passenger-memory-stats reports the following.
--------- Nginx processes ----------
PID PPID VMSize Private Name
------------------------------------
1338 1 186.0 MB 0.8 MB nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
1345 1338 186.3 MB 1.1 MB nginx: worker process
### Processes: 2
### Total private dirty RSS: 1.91 MB
---- Passenger processes -----
PID VMSize Private Name
------------------------------
1312 378.8 MB 2.1 MB Passenger watchdog
1320 663.8 MB 4.2 MB Passenger core
1768 211.5 MB 29.0 MB Passenger AppPreloader: /home/ubuntu/my-app
1987 344.1 MB 52.2 MB Passenger AppPreloader: /home/ubuntu/my-app (forking...)
2008 344.2 MB 41.1 MB Passenger AppPreloader: /home/ubuntu/my-app (forking...)
### Processes: 5
### Total private dirty RSS: 128.62 MB
I have passenger_max_pool_size set to 2, and sudo /usr/sbin/passenger-status reports that two processes are currently running. The server is receiving no hits at the moment besides me using the site.
Version : 5.3.0
Date : 2018-05-14 00:41:05 +0000
Instance: ql2TTnkw (nginx/1.14.0 Phusion_Passenger/5.3.0)
----------- General information -----------
Max pool size : 2
App groups : 1
Processes : 2
Requests in top-level queue : 0
----------- Application groups -----------
/home/ubuntu/my-app (production):
App root: /home/ubuntu/my-app
Requests in queue: 0
* PID: 1987 Sessions: 0 Processed: 1 Uptime: 3m 36s
CPU: 0% Memory : 52M Last used: 3m 36s ago
* PID: 2008 Sessions: 0 Processed: 1 Uptime: 3m 35s
CPU: 0% Memory : 41M Last used: 3m 35s ago
Passenger never did this before the update; it now keeps the (forking...) process around permanently, and it seems to be running two app processes when it only needs one. I have searched the documentation, and I know when Passenger uses forking, when it doesn't, and when it kills app processes automatically after a certain amount of time. Did they change something in the newest update that I missed in the docs? It seems that 2008 344.2 MB 89.4 MB Passenger AppPreloader: /home/ubuntu/my-app (forking...) is always shown now, and sometimes there are even two of them, whereas before the update the process always showed without the (forking...).

This is normal for Passenger >= 5.3.
Source: I'm a dev at Phusion who works on Passenger.

Related

Issue with Rails + Passenger + Nginx

I cloned an instance by creating an AMI and then launching a new instance from that AMI on AWS EC2.
Everything is working fine except the Passenger server on the new instance.
Here is the output of command $ sudo /usr/sbin/passenger-memory-stats
Version: 6.0.14
Date : 2022-07-27 05:27:54 +0000
------------- Apache processes -------------
*** WARNING: The Apache executable cannot be found.
Please set the APXS2 environment variable to your 'apxs2' executable's filename, or set the HTTPD environment variable to your 'httpd' or 'apache2' executable's filename.
---------- Nginx processes -----------
PID PPID VMSize Private Name
--------------------------------------
29997 1 221.3 MB 0.6 MB nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
30000 29997 223.5 MB 0.7 MB nginx: worker process
30001 29997 223.5 MB 0.7 MB nginx: worker process
### Processes: 3
### Total private dirty RSS: 2.10 MB
----- Passenger processes ------
PID VMSize Private Name
--------------------------------
29971 390.4 MB 2.8 MB Passenger watchdog
29981 1010.9 MB 3.8 MB Passenger core
### Processes: 2
### Total private dirty RSS: 6.52 MB
Inside the Passenger processes section, the Passenger server process is missing; that is the problem. Because of it, I am unable to start the Passenger app.

The website is under heavy load + ROR

We are running a Ruby on Rails website on CentOS 6 with two web servers and one database server. Sometimes it shows the message "The website is under heavy load"... Can somebody please help with what to check here?
We are using Passenger 4.0.21 with Ruby 1.8.7 and Apache 2.2.15. Web server is running with the default settings.
Below is some output of passenger-status:
# passenger-status
Version : 4.0.21
Date : Thu Dec 12 02:02:44 -0500 2013
Instance: 20126
----------- General information -----------
Max pool size : 6
Processes : 6
Requests in top-level queue : 0
----------- Application groups -----------
/home/web/html#default:
App root: /home/web/html
Requests in queue: 100
* PID: 20290 Sessions: 1 Processed: 53 Uptime: 24h 3m 5s
CPU: 0% Memory : 634M Last used: 23h 16m 8
* PID: 22657 Sessions: 1 Processed: 37 Uptime: 23h 15m 55s
CPU: 0% Memory : 609M Last used: 22h 44m
* PID: 29147 Sessions: 1 Processed: 146 Uptime: 20h 47m 48s
CPU: 0% Memory : 976M Last used: 18h 20m
* PID: 22216 Sessions: 1 Processed: 26 Uptime: 10h 3m 19s
CPU: 0% Memory : 538M Last used: 9h 44m 4
* PID: 23306 Sessions: 1 Processed: 75 Uptime: 9h 43m 22s
CPU: 0% Memory : 483M Last used: 8h 44m 4
* PID: 25626 Sessions: 1 Processed: 115 Uptime: 8h 46m 42s
CPU: 0% Memory : 540M Last used: 7h 59m 5
You have too many requests in queue. Since version 4.0.15 there is a limit which is 100 by default. Here is a short excerpt from http://blog.phusion.nl/2013/09/06/phusion-passenger-4-0-16-released/ which says:
Phusion Passenger now displays an error message to clients if too many
requests are queued up, instead of letting them wait. This much
improves quality of service. By default, "too many" is 100. You may
customize this with PassengerMaxRequestQueueSize (Apache) or
passenger_max_request_queue_size (Nginx).
Have a look at the user guide about this: http://www.modrails.com/documentation/Users%20guide%20Apache.html#PassengerMaxRequestQueueSize
You could try increasing it or setting it to 0 in order to disable it.
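For example, with Nginx you could add something like the following inside the http block of nginx.conf (a minimal sketch; 300 is only an illustrative value, pick one based on your traffic):
# Return an error to clients once more than 300 requests are queued up
passenger_max_request_queue_size 300;
# Or disable the limit entirely by setting it to 0
# passenger_max_request_queue_size 0;
With Apache the equivalent directive is PassengerMaxRequestQueueSize.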
EDIT
You should also check your logs to see whether there are requests that take too long; maybe some code paths in your application are slow. I prefer using New Relic for monitoring those things.

phusion passenger processes dying and new ones starting up mysteriously

As you can see, Passenger processes are dying and new ones are booting up, even though we're not explicitly restarting Passenger ourselves. We can't pinpoint what's causing this. What are some common places we should look to find out what's triggering these restarts?
The passenger-status commands were issued about 30 minutes apart. passenger_pool_idle_time is set to 0 in our conf file, which you can see here: https://gist.github.com/panabee/8ddf95a72d6a07e29c7f
We're on Passenger 4.0.5, Rails 3.2.12, and Nginx 1.4.1.
[root@mongo ~]# passenger-status
----------- General information -----------
Max pool size : 20
Processes : 3
Requests in top-level queue : 0
----------- Application groups -----------
/home/p/p#default:
App root: /home/p/p
Requests in queue: 0
* PID: 17171 Sessions: 0 Processed: 536 Uptime: 27m 56s
CPU: 0% Memory : 62M Last used: 20s ago
* PID: 18087 Sessions: 0 Processed: 363 Uptime: 17m 31s
CPU: 0% Memory : 36M Last used: 39s ago
* PID: 19382 Sessions: 0 Processed: 51 Uptime: 2m 55s
CPU: 0% Memory : 34M Last used: 5s ago
[root@mongo ~]# passenger-status
----------- General information -----------
Max pool size : 20
Processes : 2
Requests in top-level queue : 0
----------- Application groups -----------
/home/p/p#default:
App root: /home/p/p
Requests in queue: 0
* PID: 25266 Sessions: 0 Processed: 73 Uptime: 2m 56s
CPU: 0% Memory : 32M Last used: 34s ago
* PID: 25462 Sessions: 1 Processed: 18 Uptime: 51s
CPU: 0% Memory : 28M Last used: 0s ago
[root@mongo ~]#
Look in the web server error log. If the application dies, you will probably see the reason in that log file.
This is a bug in 4.0.5; 4.0.6 patches it. In the meantime, set passenger_pool_idle_time to a very large number instead of 0.
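A minimal sketch of that workaround in nginx.conf (the exact number doesn't matter as long as it is very large; the directive is in seconds):
# Instead of 0, use a huge idle time so processes are effectively never
# shut down for idleness (workaround for the 4.0.5 bug)
passenger_pool_idle_time 315360000;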

Phusion passenger has crossed maximum instances limit

I'm using the following for my rails app.
ruby 1.9.2p180 (2011-02-18 revision 30909) [x86_64-linux]
Rails 3.0.5
Phusion Passenger version 3.0.5
The app runs on a 4 GB RAM Linux box. I recently upgraded my Rails app from 3.0.1 to 3.0.5 for the critical security fix they released last week.
I've been noticing a strange thing. I have the following Passenger settings in my /etc/apache2/apache2.conf
PassengerMaxPoolSize 10
PassengerMaxInstancesPerApp 5
But there are 18 Rack instances spawned by Passenger. It's just one app on the server and there is nothing else. The app has become slow in response times. I suspect the extra Rack instances (coming out of nowhere) are occupying extra memory.
here is my free -m output
total used free shared buffers cached
Mem: 4011 3992 19 0 1 22
-/+ buffers/cache: 3968 43
Swap: 8191 5780 2411
Here is my passenger-status command output and passenger-memory-stats output.
passenger-status:
----------- General information -----------
max = 10
count = 5
active = 1
inactive = 4
Waiting on global queue: 0
----------- Application groups -----------
/home/anand/public_html/railsapp/current:
App root: /home/anand/public_html/railsapp/current
* PID: 6704 Sessions: 0 Processed: 72 Uptime: 9m 58s
* PID: 6696 Sessions: 0 Processed: 99 Uptime: 9m 58s
* PID: 6712 Sessions: 0 Processed: 69 Uptime: 9m 57s
* PID: 6688 Sessions: 0 Processed: 52 Uptime: 9m 58s
* PID: 6677 Sessions: 1 Processed: 83 Uptime: 11m 28s
passenger-memory-stats:
--------- Apache processes ---------
PID PPID VMSize Private Name
------------------------------------
6470 1 95.5 MB 0.3 MB /usr/sbin/apache2 -k start
6471 6470 94.7 MB 0.5 MB /usr/sbin/apache2 -k start
6488 6470 378.4 MB 4.6 MB /usr/sbin/apache2 -k start
6489 6470 378.0 MB 3.8 MB /usr/sbin/apache2 -k start
6774 6470 377.4 MB 3.0 MB /usr/sbin/apache2 -k start
### Processes: 5
### Total private dirty RSS: 12.20 MB
-------- Nginx processes --------
### Processes: 0
### Total private dirty RSS: 0.00 MB
------ Passenger processes ------
PID VMSize Private Name
---------------------------------
6472 87.1 MB 0.2 MB PassengerWatchdog
6475 100.9 MB 3.2 MB PassengerHelperAgent
6477 39.4 MB 4.8 MB Passenger spawn server
6482 70.7 MB 0.6 MB PassengerLoggingAgent
6677 289.1 MB 114.3 MB Rack: /home/anand/public_html/railsapp/current
6684 287.3 MB 17.2 MB Rack: /home/anand/public_html/railsapp/current
6688 295.6 MB 82.4 MB Rack: /home/anand/public_html/railsapp/current
6696 299.2 MB 88.9 MB Rack: /home/anand/public_html/railsapp/current
6704 299.0 MB 87.3 MB Rack: /home/anand/public_html/railsapp/current
6712 312.6 MB 113.3 MB Rack: /home/anand/public_html/railsapp/current
23808 1174.7 MB 190.9 MB Rack: /home/anand/public_html/railsapp/current
26271 1767.0 MB 690.0 MB Rack: /home/anand/public_html/railsapp/current
28888 1584.7 MB 177.8 MB Rack: /home/anand/public_html/railsapp/current
32403 1638.5 MB 230.3 MB Rack: /home/anand/public_html/railsapp/current
32427 1573.6 MB 253.4 MB Rack: /home/anand/public_html/railsapp/current
32443 1576.0 MB 234.7 MB Rack: /home/anand/public_html/railsapp/current
### Processes: 16
### Total private dirty RSS: 2289.34 MB
What is going wrong here? Is Rails 3.0.5 starting up extra Rack apps? Please help.

Passenger Spawning a lot of Rack Applications

output of passenger-memory-stats
----- Passenger processes -----
PID VMSize Private Name
-------------------------------
28572 207.4 MB ? Rack: /home/myapp/application
28580 207.0 MB ? Rack: /home/myapp/application
28588 206.0 MB ? Rack: /home/myapp/application
28648 206.5 MB ? Rack: /home/myapp/application
29005 23.0 MB ? PassengerWatchdog
29008 100.5 MB ? PassengerHelperAgent
29010 43.1 MB ? Passenger spawn server
29013 70.8 MB ? PassengerLoggingAgent
29053 202.0 MB ? Passenger ApplicationSpawner: /home/myapp/application
29105 202.3 MB ? Rack: /home/myapp/application
29114 202.3 MB ? Rack: /home/myapp/application
29121 202.3 MB ? Rack: /home/myapp/application
29130 202.3 MB ? Rack: /home/myapp/application
29138 202.3 MB ? Rack: /home/myapp/application
That looks like a lot of spawned processes... this is an app currently in development with no one (that I know of) hitting it...
the output of passenger-status
App root: /home/myapp/application
* PID: 29105 Sessions: 1 Processed: 0 Uptime: 15m 11s
* PID: 29114 Sessions: 1 Processed: 0 Uptime: 14m 0s
* PID: 29121 Sessions: 1 Processed: 0 Uptime: 14m 0s
* PID: 29130 Sessions: 1 Processed: 0 Uptime: 14m 0s
* PID: 29138 Sessions: 1 Processed: 0 Uptime: 14m 0s
First, is this normal?
Second, possible causes?
For anyone having this issue of Rails hanging: if you are running on a limited-memory VPS, check and make sure you tune your max pool size so that you don't spawn more instances of the app than your system can handle. The default is 6, which is apparently too many for memory-strapped VPSes.
Docs about max pool setting:
http://www.modrails.com/documentation/Users%20guide%20Nginx.html#PassengerMaxPoolSize
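As a rough sketch for Nginx (3 is only an example; size it from the per-process memory that passenger-memory-stats reports for your app and the RAM of your VPS):
# Cap the total number of application processes Passenger may spawn
passenger_max_pool_size 3;
On Apache the equivalent directive is PassengerMaxPoolSize.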
It may be that some processes survive from earlier versions of your app. Our app's Rack processes each point to a specific release of our app.
95171 2491.8 MB 4.8 MB Rack: /Deploy/theapp/releases/20120530013237
And there were multiple processes pointing to many different releases, which leads me to conclude these are left over when the app is restarted.
I thought maybe touching tmp/restart.txt instead of restarting Apache has this effect. So I set :use_sudo to true and am now restarting with run "#{try_sudo} /opt/local/apache2/bin/apachectl graceful" instead, and the only Rack processes I see are those that were just started.
