Passenger processes restart even though PassengerPoolIdleTime is 0 - ruby-on-rails

I have set PassengerPoolIdleTime to 0, with the expectation that this means I can "warm" up a bunch of passenger processes on my server, and the next time I have a burst of traffic (even if it is days later), they will all be warmed up and ready to accept requests.
What I'm seeing instead is that every morning when I get up, passenger-status shows only a handful of processes and they have all only been up since midnight. The previous day I'd warmed up a bunch of processes and the last time I looked at passenger-status (before midnight) there were 50.
Here's the entire Passenger-related snippet from my httpd.conf (I'm on CentOS):
LoadModule passenger_module /usr/local/lib/ruby/gems/1.8/gems/passenger-2.2.11/ext/apache2/mod_passenger.so
PassengerRoot /usr/local/lib/ruby/gems/1.8/gems/passenger-2.2.11
PassengerRuby /usr/local/bin/ruby
PassengerMaxPoolSize 60
PassengerPoolIdleTime 0
I've checked the crontabs for root and apache, to see if there might be something triggering an apache restart, but I don't see it.
Here's a snippet of passenger-status, about 11 hours and 46 minutes after midnight:
----------- General information -----------
max = 60
count = 3
active = 0
inactive = 3
Waiting on global queue: 0
----------- Domains -----------
/var/www/myapp/current:
PID: 20704 Sessions: 0 Processed: 360 Uptime: 11h 44m 16s
PID: 20706 Sessions: 0 Processed: 4249 Uptime: 11h 44m 9s
PID: 20708 Sessions: 0 Processed: 14189 Uptime: 11h 44m 9s
And here's what I see if I do a ps aux | grep apache:
apache 13297 0.0 0.0 546652 5312 ? Sl 14:28 0:00 /usr/sbin/httpd.worker
apache 13332 0.0 0.0 546652 5336 ? Sl 14:28 0:00 /usr/sbin/httpd.worker
apache 13334 0.0 0.0 546652 5328 ? Sl 14:28 0:00 /usr/sbin/httpd.worker
root 16841 0.0 0.0 6004 628 pts/0 S+ 15:48 0:00 grep apache
root 20478 0.0 0.0 88724 3640 ? Sl 04:02 0:01 /usr/local/lib/ruby/gems/1.8/gems/passenger-2.2.11/ext/apache2/ApplicationPoolServerExecutable 0 /usr/local/lib/ruby/gems/1.8/gems/passenger-2.2.11/bin/passenger-spawn-server /usr/local/bin/ruby /tmp/passenger.30916
apache 20704 0.0 1.7 251080 135164 ? S 04:02 0:06 Rails: /var/www/apps/myapp/current
apache 20706 0.2 1.7 255188 137704 ? S 04:02 1:52 Rails: /var/www/apps/myapp/current
apache 20708 0.9 1.7 255180 139332 ? S 04:02 6:26 Rails: /var/www/apps/myapp/current
The server is on UTC, so 04:02 corresponds to 12:02am my time (EDT).

Assuming that logrotate is the culprit, I'd suggest using the copytruncate feature instead of reloading in postrotate. copytruncate isn't atomic, meaning you could lose a couple of seconds' worth of logs. You'll also briefly double the disk space consumed by that log file. Here are the details.
/var/log/apache2/*.log {
weekly
missingok
rotate 52
compress
delaycompress
notifempty
create 640 root adm
sharedscripts
copytruncate
#postrotate
# /etc/init.d/apache2 reload > /dev/null
#endscript
}

You could also send your logs to a program which writes to a file based on the date, eliminating logrotate entirely:
CustomLog "|/usr/local/bin/my_log_script" combined
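For example, Apache's bundled rotatelogs utility can serve as that piped logger. A minimal sketch, assuming the binary lives at /usr/sbin/rotatelogs and daily rotation is wanted (adjust the path and interval for your system):
# Write to a date-stamped file and switch to a new one every 86400 seconds (daily)
CustomLog "|/usr/sbin/rotatelogs /var/log/httpd/access_log.%Y-%m-%d 86400" combined
Since rotatelogs opens the new file itself, no reload (and no logrotate rule) is needed for that log.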

I discovered what was happening. Here is my logrotate conf file for httpd:
/var/log/httpd/*log {
missingok
notifempty
sharedscripts
postrotate
/sbin/service httpd reload > /dev/null 2>/dev/null || true
endscript
}
It's the postrotate script that is doing it: reloading Apache causes the Passenger processes to die off.
Does anyone have a good suggestion for how to do this without having to reload Apache? Or a way to reload Apache without killing off the Passenger processes (if that's possible)?

The easiest way to rotate logs without restarting/reloading a service is to use the copytruncate option. That way logrotate copies the contents of the log file to another file and then empties the current log file, so the service continues to log to the same file while logrotate does its thing. For example:
/var/log/httpd/*log {
copytruncate
missingok
notifempty
sharedscripts
}

Related

Does the pause container have PID 1 in the pod?

[root@k8s001 ~]# docker exec -it f72edf025141 /bin/bash
root@b33f3b7c705d:/var/lib/ghost# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 1012 4 ? Ss 02:45 0:00 /pause
root 8 0.0 0.0 10648 3400 ? Ss 02:57 0:00 nginx: master process nginx -g daemon off;
101 37 0.0 0.0 11088 1964 ? S 02:57 0:00 nginx: worker process
node 38 0.9 0.0 2006968 116572 ? Ssl 02:58 0:06 node current/index.js
root 108 0.0 0.0 3960 2076 pts/0 Ss 03:09 0:00 /bin/bash
root 439 0.0 0.0 7628 1400 pts/0 R+ 03:10 0:00 ps aux
This output comes from the internet; it says the pause container is the parent process of the other containers in the pod, and that if you attach to the pod or to another container and run ps aux, you will see that.
Is that correct? When I do it in my k8s cluster, the result is different: PID 1 is not /pause.
...Is that correct? When I do it in my k8s cluster, the result is different: PID 1 is not /pause.
This has changed: pause no longer holds PID 1, despite being the first container created by the container runtime to set up the pod (e.g. cgroups, namespaces, etc.). Pause is isolated (hidden) from the rest of the containers in the pod regardless of your ENTRYPOINT/CMD. See here for more background information.
By default, Docker will run your entrypoint (or the command, if there is no entrypoint) as PID 1. However, that is not necessarily always the case, since, depending on how you start the container, Docker (or your orchestrator) can also run its custom init process as PID 1:
$ docker run -d --init --name test alpine sleep infinity
849efe38ecec439550738e981065ec4aff55ef5607f03b9fed975e2d3146b9b0
$ docker exec -ti test ps
PID USER TIME COMMAND
1 root 0:00 /sbin/docker-init -- sleep infinity
7 root 0:00 sleep infinity
8 root 0:00 ps
For more information on why you would want your entrypoint not to be PID 1, you can check this explanation from a tini developer:
Now, unlike other processes, PID 1 has a unique responsibility, which is to reap zombie processes.
Zombie processes are processes that:
Have exited.
Were not waited on by their parent process (wait is the syscall parent processes use to retrieve the exit code of their children).
Have lost their parent (i.e. their parent exited as well), which means they'll never be waited on by their parent.
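If you want to see this in action, here is a rough sketch with a throwaway container (the image name and sleep durations are arbitrary). PID 1 execs into a plain sleep, which never calls wait(), so its exited child lingers as a zombie; rerunning the same command with --init makes the zombie disappear because docker-init reaps it:
# PID 1 becomes `sleep 60`; the backgrounded `sleep 1` exits and is never reaped
docker run -d --name zombie-demo alpine sh -c 'sleep 1 & exec sleep 60'
sleep 5
docker exec zombie-demo ps   # the dead child shows up as [sleep] (a zombie)
docker rm -f zombie-demo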

Docker container RAM usage keeps increasing

I have launched several Docker containers and, using docker stats, I have verified that one of them keeps increasing its RAM consumption from the moment it starts until it is restarted.
My question is whether there is any way to verify where that consumption comes from within the Docker container. Is there some way to check the consumption inside the container, something in the style of docker stats but for the inside of the container?
Thanks for your cooperation.
Not sure if it's what you are asking for, but here's an example:
(Before you start):
Run a test container docker run --rm -it ubuntu
Install stress by typing apt-get update and apt-get install stress
Run stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 * 0.9;}' < /proc/meminfo)k --vm-keep -m 1 (it will start consuming memory)
1. with top
If you go to a new terminal you can type docker container exec -it <your container name> top and you will get something like the following:
(notice that the %MEM usage of PID 285 is 68.8%)
docker container exec -it dreamy_jang top
top - 12:46:04 up 22 min, 0 users, load average: 1.48, 1.55, 1.12
Tasks: 4 total, 2 running, 2 sleeping, 0 stopped, 0 zombie
%Cpu(s): 20.8 us, 0.8 sy, 0.0 ni, 78.4 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 6102828 total, 150212 free, 5396604 used, 556012 buff/cache
KiB Swap: 1942896 total, 1937508 free, 5388 used. 455368 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
285 root 20 0 4209376 4.007g 212 R 100.0 68.8 6:56.90 stress
1 root 20 0 18500 3148 2916 S 0.0 0.1 0:00.09 bash
274 root 20 0 36596 3072 2640 R 0.0 0.1 0:00.21 top
284 root 20 0 8240 1192 1116 S 0.0 0.0 0:00.00 stress
2. with ps aux
Again, from a new terminal you type docker container exec -it <your container name> ps aux
(notice that the %MEM usage of PID 285 is 68.8%)
docker container exec -it dreamy_jang ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 18500 3148 pts/0 Ss 12:25 0:00 /bin/bash
root 284 0.0 0.0 8240 1192 pts/0 S+ 12:39 0:00 stress --vm-byt
root 285 99.8 68.8 4209376 4201300 pts/0 R+ 12:39 8:53 stress --vm-byt
root 286 0.0 0.0 34400 2904 pts/1 Rs+ 12:48 0:00 ps aux
My source for this stress thing is from this question: How to fill 90% of the free memory?
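Another rough option, from inside the container itself, is to read the cgroup memory accounting that docker stats is based on. Which files exist depends on whether the host uses cgroup v1 or v2, so treat these paths as assumptions to verify on your system:
# cgroup v1 hosts:
cat /sys/fs/cgroup/memory/memory.usage_in_bytes
cat /sys/fs/cgroup/memory/memory.stat
# cgroup v2 hosts:
cat /sys/fs/cgroup/memory.current
cat /sys/fs/cgroup/memory.stat
Comparing these totals with the per-process RSS from ps aux or top helps tell whether the growth comes from a single process or from something like the page cache.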

HAProxy reload with different backend server IP

Is it possible to reload HAProxy when the backend server IP has changed? If so, how?
This is essential for a Docker stack: on every deploy, new containers with different IPs replace the old containers.
In our implementation, services occasionally return 503 because an old HAProxy process has not terminated and is still accepting requests while its backend server is already gone. The httplog output shows that some requests are forwarded to a backend that no longer exists.
# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 893 0.0 0.0 0 0 ? Zs 19:39 0:01 [haproxy] <defunct>
root 898 0.3 0.0 49416 9640 ? Ss 19:49 0:13 /usr/local/sbin/haproxy -D -f /app/haproxy.cfg -p /var/run/haproxy.pid
root 915 0.2 0.0 0 0 ? Zs 19:49 0:12 [haproxy] <defunct>
root 920 0.2 0.0 49308 10196 ? Ss 20:57 0:01 /usr/local/sbin/haproxy -D -f /app/haproxy.cfg -p /var/run/haproxy.pid
root 937 0.0 0.0 0 0 ? Zs 20:57 0:00 [haproxy] <defunct>
root 942 0.3 0.0 49296 9880 ? Ss 20:58 0:01 /usr/local/sbin/haproxy -D -f /app/haproxy.cfg -p /var/run/haproxy.pid
root 959 0.2 0.0 49296 9852 ? Ss 20:58 0:01 /usr/local/sbin/haproxy -D -f /app/haproxy.cfg -p /var/run/haproxy.pid
[Edit]
I am using docker swarm mode. I did try publishing the service's port to the host; however, the performance of the swarm's internal load balancer is poor, and I am trying to avoid it.
While it should be possible to change the HAProxy configuration to point to a different backend server, it seems like it would be easier to bind the Docker containers' ports to predictable ports on the Docker host, so the HAProxy config does not need to change.
For example:
# Publish the container's port 9999 on port 9999 of the Docker host
docker run -d -p 9999:9999 hello_world
And your HAProxy config could look like
backend something
# Assuming the Docker host's IP address is 192.0.2.123
server some-server 192.0.2.123:9999
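If you do end up reloading HAProxy after the backend IPs change, the usual way to avoid the lingering old processes shown in your ps output is a soft reload. A sketch using the config and pid-file paths from that output (verify the exact invocation against your HAProxy version):
# Start a new haproxy and tell it (-sf) to ask the old PIDs to finish their
# current connections and then exit, instead of hanging around
haproxy -D -f /app/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)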

docker apache passenger: error cannot load such file bundler/setup (LoadError)

I'm trying to build a Docker image running Apache (+ Passenger), Rails and Shibboleth.
Unfortunately I can't get Apache + Passenger running ...
I'd appreciate any hint! Maybe it is a permission problem? Everything was installed as root, but obviously some processes are running as nobody (as shown in the error log).
My docker base-image is "ruby:2.0.0" (debian 8). In this image I installed apache2, apache2-threaded-dev, libapr1-dev, libaprutil1-dev via apt-get and passenger via 'gem install passenger -v 4.0.59'. After this I used passenger-install-apache2-module to install the module.
Here is the error log:
cannot load such file -- bundler/setup (LoadError)
/usr/local/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:55:in `require'
/usr/local/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:55:in `require'
/usr/lib/ruby/vendor_ruby/phusion_passenger/loader_shared_helpers.rb:411:in `activate_gem'
/usr/lib/ruby/vendor_ruby/phusion_passenger/loader_shared_helpers.rb:295:in `block in run_load_path_setup_code'
/usr/lib/ruby/vendor_ruby/phusion_passenger/loader_shared_helpers.rb:416:in `running_bundler'
/usr/lib/ruby/vendor_ruby/phusion_passenger/loader_shared_helpers.rb:294:in `run_load_path_setup_code'
/usr/share/passenger/helper-scripts/rack-preloader.rb:99:in `preload_app'
/usr/share/passenger/helper-scripts/rack-preloader.rb:153:in `<module:App>'
/usr/share/passenger/helper-scripts/rack-preloader.rb:29:in `<module:PhusionPassenger>'
/usr/share/passenger/helper-scripts/rack-preloader.rb:28:in `<main>'
Environment (value of RAILS_ENV, RACK_ENV, WSGI_ENV, NODE_ENV and PASSENGER_APP_ENV)
development
Ruby interpreter command
/usr/local/bin/ruby
User and groups
uid=65534(nobody) gid=65534(nogroup) groups=65534(nogroup)
Apache passenger.load:
LoadModule passenger_module /usr/local/bundle/gems/passenger-4.0.59/buildout/apache2/mod_passenger.so
Apache passenger.conf:
<IfModule mod_passenger.c>
PassengerRoot /usr/local/bundle/gems/passenger-4.0.59
PassengerDefaultRuby /usr/local/bin/ruby
</IfModule>
And myapp.conf:
<VirtualHost *:80>
#ServerName yourserver.com
# Tell Apache and Passenger where your app's 'public' directory is
DocumentRoot /var/www/myapp/public
PassengerRuby /usr/local/bin/ruby
RailsEnv development
# Relax Apache security settings
<Directory /var/www/myapp/public>
Allow from all
Options -MultiViews
# Uncomment this if you're on Apache >= 2.4:
Require all granted
</Directory>
</VirtualHost>
Installed versions:
apache2 -v
Server version: Apache/2.4.10 (Debian)
ruby -v
ruby 2.0.0p645 (2015-04-13 revision 50299)
gem -v
2.0.14
rails -v
Rails 4.0.5
passenger-config validate-install says "Everything looks good". And 'passenger-status':
Version : 4.0.59
Date : 2015-10-13 09:03:32 +0000
Instance: 5578
----------- General information -----------
Max pool size : 6
Processes : 0
Requests in top-level queue : 0
----------- Application groups -----------
/var/www/myapp#default:
App root: /var/www/myapp
Requests in queue: 0
passenger-memory-stats:
Version: 4.0.59
Date : 2015-10-13 09:05:31 +0000
--------- Apache processes ---------
PID PPID VMSize Private Name
------------------------------------
5578 1 83.2 MB ? /usr/sbin/apache2 -k start
5599 5578 363.5 MB ? /usr/sbin/apache2 -k start
5600 5578 491.5 MB ? /usr/sbin/apache2 -k start
### Processes: 3
### Total private dirty RSS: 0.00 MB (?)
-------- Nginx processes --------
### Processes: 0
### Total private dirty RSS: 0.00 MB
---- Passenger processes -----
PID VMSize Private Name
------------------------------
5581 218.3 MB ? PassengerWatchdog
5584 564.5 MB ? PassengerHelperAgent
5590 217.8 MB ? PassengerLoggingAgent
### Processes: 3
### Total private dirty RSS: 0.00 MB (?)
All running processes:
ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 20300 1780 ? Ss 08:47 0:00 bash
root 6077 0.0 0.0 85160 3208 ? Ss 09:11 0:00 /usr/sbin/apache2 -k start
root 6080 0.0 0.0 223500 2044 ? Ssl 09:11 0:00 PassengerWatchdog
root 6083 0.0 0.0 578092 5556 ? Sl 09:11 0:00 PassengerHelperAgent
nobody 6089 0.0 0.0 223028 5008 ? Sl 09:11 0:00 PassengerLoggingAgent
www-data 6098 0.0 0.0 437788 5452 ? Sl 09:11 0:00 /usr/sbin/apache2 -k start
www-data 6099 0.0 0.0 437780 5300 ? Sl 09:11 0:00 /usr/sbin/apache2 -k start
EDIT
After 2 days of searching and trying, I found a solution (right after posting my question here ...):
I have to put this into my apache virtual host configuration of my app:
SetEnv GEM_HOME /usr/local/bundle
This solution was posted at https://stackoverflow.com/a/19099768/4846489
I don't know why this is necessary, because I don't have a previous installation (as stated there). It is really strange, because this environment variable is already there when I log in to my Docker container (docker exec -u nobody)...
Setting GEM_HOME just patches over the real problem. This information here is your hint:
User and groups:
uid=65534(nobody) gid=65534(nogroup) groups=65534(nogroup)
Passenger is trying to run your app as the user 'nobody'. Most likely, that is not what you intended: your gem bundle was probably installed by a different user, and the 'nobody' user probably does not have access to that installed gem bundle.
Why is Passenger running your app as 'nobody'? Because of user sandboxing rules, most likely triggered by wrong permissions on your app. You should fix that.
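As a sketch of two common fixes (the user name appuser is hypothetical, and the directives assume Passenger 4.x for Apache): either make the intended user own the app, since user sandboxing runs the app as the owner of config.ru, or pin the user explicitly in the VirtualHost:
# Option 1: give the whole app (including config.ru and the bundle) to a real user:
#   chown -R appuser:appuser /var/www/myapp
# Option 2: tell Passenger explicitly which user and group to run the app as
PassengerUser appuser
PassengerGroup appuser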
By the way, why are you building your own Docker image? Phusion provides its own passenger-docker base image.

passenger + nginx fails when I disconnect from terminal

I use Ubuntu 12.04 LTS, rvm, passenger and nginx installed by passenger.
I connect to my server with PuTTY, start nginx via init.d/nginx, and my Rails application works well.
But when I disconnect from the terminal, I see standard application errors ("Something went wrong", etc.).
nginx error log output:
<internal:lib/rubygems/custom_require>:29:in `require': no such file to load -- rubygems (LoadError)
from <internal:lib/rubygems/custom_require>:29:in `require'
from <internal:gem_prelude>:167:in `load_full_rubygems_library'
from <internal:gem_prelude>:217:in `try_activate'
from <internal:lib/rubygems/custom_require>:32:in `rescue in require'
from <internal:lib/rubygems/custom_require>:29:in `require'
from /var/lib/passenger-standalone/3.0.18-x86_64-ruby1.9.3-linux-gcc4.6.3-1002/support/helper-scripts/passenger-spawn-server:75:in `<main>'
*** Passenger ERROR (ext/common/ApplicationPool/../SpawnManager.h:220):
Could not start the spawn server: /home/torteg/.rvm/wrappers/ruby-1.9.2-p320/ruby: No such file or directory (2)
*** Passenger ERROR (ext/common/ApplicationPool/../SpawnManager.h:220):
Could not start the spawn server: /home/torteg/.rvm/wrappers/ruby-1.9.2-p320/ruby: No such file or directory (2)
ps aux output:
root 5066 0.0 0.0 220928 1936 ? Ssl 15:46 0:00 PassengerWatchdog
root 5069 0.0 0.0 1872956 2340 ? Sl 15:46 0:00 PassengerHelperAgent
root 5071 0.5 0.2 114348 10172 ? Sl 15:46 0:00 Passenger spawn server
nobody 5074 0.0 0.1 169324 4688 ? Sl 15:46 0:00 PassengerLoggingAgent
root 5105 0.0 0.0 39472 1028 ? Ss 15:46 0:00 nginx: master process /opt/nginx/sbin/nginx
torteg 5106 0.0 0.0 39892 2276 ? S 15:46 0:00 nginx: worker process
torteg 5116 13.2 1.5 225720 62432 ? Sl 15:46 0:03 Passenger ApplicationSpawner: /webapps/ngt-storage
torteg 5132 2.4 1.5 230940 64520 ? Sl 15:46 0:00 Rack: /webapps/ngt-storage
root 5141 0.1 0.1 160656 7272 ? Ss 15:47 0:00 sshd: torteg [priv]
torteg 5145 0.0 0.0 164168 1820 ? S 15:47 0:00 sshd: torteg [priv]
torteg 5291 0.0 0.0 160656 2656 ? S 15:47 0:00 sshd: torteg@pts/3
So when you ssh into your production server, somewhere else in the world (or in the cloud), and you visit mydomain.com, it works. As soon as you log out of that ssh connection, nginx and Passenger stop working? How are these two independent events tied to each other?
What service are you using to host this app?
Possible answer (I will clean this up when you get answers back to us):
I see you are using rvm too... unless the rvm path isn't set for your deploy user (just thinking out loud).
Created a new user deploy with the default bash shell. Installed rvm for this user. Then I set user and passenger_user to deploy in nginx.conf. Cleaned precompiled assets. Works well!
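For anyone hitting the same problem, a minimal sketch of the nginx.conf lines that change (directive names as described above; the ruby wrapper path is an assumption, so use whatever wrapper rvm generated for the deploy user):
user deploy;
http {
  # ... passenger_root and the rest of the http block stay as they are ...
  passenger_user deploy;
  passenger_ruby /home/deploy/.rvm/wrappers/ruby-1.9.2-p320/ruby;
}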
