Heavy load web service because of blocked Phusion Passenger queues - ruby-on-rails

We are developing a Web service using Ruby 2 on Rails 4, Mongoid 4 and MongoDB 2.6. It uses Sidekiq 3.3.0 and Redis 2.8 and runs on Phusion Passenger 5.0.4 + Nginx 1.7.10. It only serves mobile clients and an AngularJS web client via JSON APIs.
Normally everything works fine and API requests are processed and responded to in under 1s. But during rush hours the service comes under heavy load and APIs return 503 Service Unavailable. Below are our Nginx and Mongoid configs:
Nginx config
passenger_root /home/deployer/.rvm/gems/ruby-2.1.3/gems/passenger-4.0.53;
#passenger_ruby /usr/bin/ruby;
passenger_max_pool_size 70;
passenger_min_instances 1;
passenger_max_requests 20; # A workaround if apps are mem-leaking
passenger_pool_idle_time 300;
passenger_max_instances_per_app 30;
passenger_pre_start http://production_domain/;
## Note: there're 2 apps with the same config
server {
listen 80;
server_name production_domain;
passenger_enabled on;
root /home/deployer/app_name-production/current/public;
more_set_headers 'Access-Control-Allow-Origin: *';
more_set_headers 'Access-Control-Allow-Methods: POST, GET, OPTIONS, PUT, DELETE, HEAD';
more_set_headers 'Access-Control-Allow-Headers: DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
if ($request_method = 'OPTIONS') {
# more_set_headers 'Access-Control-Allow-Origin: *';
# add_header 'Access-Control-Allow-Origin' '*';
# add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
# add_header 'Access-Control-Allow-Headers' 'DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,X-FooA$
# add_header 'Access-Control-Max-Age' 1728000;
# add_header 'Content-Type' 'text/plain charset=UTF-8';
# add_header 'Content-Length' 0;
return 200;
}
access_log /var/log/nginx/app_name-production.access.log;
error_log /var/log/nginx/app_name-production.error.log;
error_page 404 /404.html;
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /etc/nginx/html/;
}
rails_env production;
}
Mongoid config
development:
  sessions:
    default:
    another:
      uri: mongodb://127.0.0.1:27017/database_name
test:
  sessions:
    default:
    another:
      uri: mongodb://127.0.0.1:27017/database_name
      options:
        pool_size: 10
        pool_timeout: 15
        retry_interval: 1
        max_retries: 30
        refresh_interval: 10
        timeout: 15
staging:
  sessions:
    default:
    another:
      uri: mongodb://staging_domain/staging_database
      options:
        pool_size: 10
        pool_timeout: 15
        retry_interval: 1
        max_retries: 30
        refresh_interval: 10
        timeout: 15
production:
  sessions:
    default:
    another:
      uri: mongodb://production_domain/production_database
      options:
        pool_size: 30
        pool_timeout: 15
        retry_interval: 1
        max_retries: 30
        refresh_interval: 10
        timeout: 15
The Sidekiq config is shown further below. Here is the Passenger status output under heavy load:
Version : 5.0.4
Date : 2015-04-04 09:31:14 +0700
Instance: MxPcaaBy (nginx/1.7.10 Phusion_Passenger/5.0.4)
----------- General information -----------
Max pool size : 120
Processes : 62
Requests in top-level queue : 0
----------- Application groups -----------
/home/deployer/memo_rails-staging/current/public (staging)#default:
App root: /home/deployer/memo_rails-staging/current
Requests in queue: 0
* PID: 20453 Sessions: 0 Processed: 639 Uptime: 14h 34m 26s
CPU: 0% Memory : 184M Last used: 14s ago
* PID: 402 Sessions: 0 Processed: 5 Uptime: 13h 0m 42s
CPU: 0% Memory : 171M Last used: 23m 35s ago
* PID: 16081 Sessions: 0 Processed: 3 Uptime: 10h 26m 9s
CPU: 0% Memory : 163M Last used: 24m 9s ago
* PID: 30300 Sessions: 0 Processed: 1 Uptime: 4h 19m 43s
CPU: 0% Memory : 164M Last used: 24m 15s ago
/home/deployer/memo_rails-production/current/public (production)#default:
App root: /home/deployer/memo_rails-production/current
Requests in queue: 150
* PID: 25924 Sessions: 1 Processed: 841 Uptime: 20m 49s
CPU: 3% Memory : 182M Last used: 7m 58s ago
* PID: 25935 Sessions: 1 Processed: 498 Uptime: 20m 49s
CPU: 2% Memory : 199M Last used: 5m 40s ago
* PID: 25948 Sessions: 1 Processed: 322 Uptime: 20m 49s
CPU: 1% Memory : 200M Last used: 7m 57s ago
* PID: 25960 Sessions: 1 Processed: 177 Uptime: 20m 49s
CPU: 0% Memory : 158M Last used: 19s ago
* PID: 25972 Sessions: 1 Processed: 115 Uptime: 20m 48s
CPU: 0% Memory : 151M Last used: 7m 56s ago
* PID: 25987 Sessions: 1 Processed: 98 Uptime: 20m 48s
CPU: 0% Memory : 179M Last used: 7m 56s ago
* PID: 25998 Sessions: 1 Processed: 77 Uptime: 20m 48s
CPU: 0% Memory : 145M Last used: 7m 2s ago
* PID: 26012 Sessions: 1 Processed: 97 Uptime: 20m 48s
CPU: 0% Memory : 167M Last used: 19s ago
* PID: 26024 Sessions: 1 Processed: 42 Uptime: 20m 47s
CPU: 0% Memory : 148M Last used: 7m 55s ago
* PID: 26038 Sessions: 1 Processed: 44 Uptime: 20m 47s
CPU: 0% Memory : 164M Last used: 1m 0s ago
* PID: 26050 Sessions: 1 Processed: 29 Uptime: 20m 47s
CPU: 0% Memory : 142M Last used: 7m 54s ago
* PID: 26063 Sessions: 1 Processed: 41 Uptime: 20m 47s
CPU: 0% Memory : 168M Last used: 1m 1s ago
* PID: 26075 Sessions: 1 Processed: 23 Uptime: 20m 47s
CPU: 0% Memory : 126M Last used: 7m 51s ago
* PID: 26087 Sessions: 1 Processed: 19 Uptime: 20m 46s
CPU: 0% Memory : 120M Last used: 7m 50s ago
* PID: 26099 Sessions: 1 Processed: 37 Uptime: 20m 46s
CPU: 0% Memory : 131M Last used: 7m 3s ago
* PID: 26111 Sessions: 1 Processed: 20 Uptime: 20m 46s
CPU: 0% Memory : 110M Last used: 7m 49s ago
* PID: 26126 Sessions: 1 Processed: 28 Uptime: 20m 46s
CPU: 0% Memory : 172M Last used: 1m 56s ago
* PID: 26141 Sessions: 1 Processed: 20 Uptime: 20m 45s
CPU: 0% Memory : 107M Last used: 7m 19s ago
* PID: 26229 Sessions: 1 Processed: 20 Uptime: 20m 21s
CPU: 0% Memory : 110M Last used: 11s ago
* PID: 26241 Sessions: 1 Processed: 9 Uptime: 20m 21s
CPU: 0% Memory : 105M Last used: 7m 47s ago
* PID: 26548 Sessions: 1 Processed: 23 Uptime: 19m 14s
CPU: 0% Memory : 125M Last used: 7m 44s ago
* PID: 27465 Sessions: 1 Processed: 30 Uptime: 15m 23s
CPU: 0% Memory : 109M Last used: 2m 22s ago
* PID: 27501 Sessions: 1 Processed: 28 Uptime: 15m 18s
CPU: 0% Memory : 117M Last used: 7m 15s ago
* PID: 27511 Sessions: 1 Processed: 34 Uptime: 15m 17s
CPU: 0% Memory : 144M Last used: 5m 40s ago
* PID: 27522 Sessions: 1 Processed: 30 Uptime: 15m 17s
CPU: 0% Memory : 110M Last used: 26s ago
* PID: 27533 Sessions: 1 Processed: 38 Uptime: 15m 17s
CPU: 0% Memory : 110M Last used: 4m 44s ago
* PID: 27555 Sessions: 1 Processed: 27 Uptime: 15m 15s
CPU: 0% Memory : 120M Last used: 1m 29s ago
* PID: 27570 Sessions: 1 Processed: 21 Uptime: 15m 14s
CPU: 0% Memory : 107M Last used: 7m 1s ago
* PID: 27590 Sessions: 1 Processed: 8 Uptime: 15m 13s
CPU: 0% Memory : 105M Last used: 7m 34s ago
* PID: 27599 Sessions: 1 Processed: 13 Uptime: 15m 13s
CPU: 0% Memory : 107M Last used: 7m 0s ago
* PID: 27617 Sessions: 1 Processed: 26 Uptime: 15m 12s
CPU: 0% Memory : 114M Last used: 4m 49s ago
* PID: 27633 Sessions: 1 Processed: 19 Uptime: 15m 11s
CPU: 0% Memory : 137M Last used: 1m 14s ago
* PID: 27643 Sessions: 1 Processed: 15 Uptime: 15m 11s
CPU: 0% Memory : 132M Last used: 6m 19s ago
* PID: 27661 Sessions: 1 Processed: 23 Uptime: 15m 10s
CPU: 0% Memory : 112M Last used: 9s ago
* PID: 27678 Sessions: 1 Processed: 24 Uptime: 15m 9s
CPU: 0% Memory : 108M Last used: 6m 53s ago
* PID: 27692 Sessions: 1 Processed: 9 Uptime: 15m 9s
CPU: 0% Memory : 105M Last used: 7m 22s ago
* PID: 28400 Sessions: 1 Processed: 19 Uptime: 12m 45s
CPU: 0% Memory : 111M Last used: 1m 25s ago
* PID: 28415 Sessions: 1 Processed: 26 Uptime: 12m 45s
CPU: 0% Memory : 149M Last used: 3m 45s ago
* PID: 28439 Sessions: 1 Processed: 14 Uptime: 12m 44s
CPU: 0% Memory : 106M Last used: 59s ago
* PID: 28477 Sessions: 1 Processed: 12 Uptime: 12m 42s
CPU: 0% Memory : 108M Last used: 1m 34s ago
* PID: 28495 Sessions: 1 Processed: 14 Uptime: 12m 41s
CPU: 0% Memory : 108M Last used: 18s ago
* PID: 29315 Sessions: 1 Processed: 7 Uptime: 10m 1s
CPU: 0% Memory : 107M Last used: 7m 0s ago
* PID: 29332 Sessions: 1 Processed: 13 Uptime: 10m 0s
CPU: 0% Memory : 108M Last used: 5m 39s ago
* PID: 29341 Sessions: 1 Processed: 7 Uptime: 10m 0s
CPU: 0% Memory : 105M Last used: 6m 53s ago
* PID: 29353 Sessions: 1 Processed: 11 Uptime: 10m 0s
CPU: 0% Memory : 119M Last used: 5m 4s ago
* PID: 29366 Sessions: 1 Processed: 16 Uptime: 9m 59s
CPU: 0% Memory : 119M Last used: 3m 13s ago
* PID: 29377 Sessions: 1 Processed: 10 Uptime: 9m 59s
CPU: 0% Memory : 113M Last used: 1m 34s ago
* PID: 29388 Sessions: 1 Processed: 2 Uptime: 9m 59s
CPU: 0% Memory : 97M Last used: 7m 28s ago
* PID: 29400 Sessions: 1 Processed: 6 Uptime: 9m 59s
CPU: 0% Memory : 103M Last used: 6m 53s ago
* PID: 29422 Sessions: 1 Processed: 17 Uptime: 9m 58s
CPU: 0% Memory : 132M Last used: 1m 24s ago
* PID: 29438 Sessions: 1 Processed: 1 Uptime: 9m 57s
CPU: 0% Memory : 96M Last used: 6m 52s ago
* PID: 29451 Sessions: 1 Processed: 21 Uptime: 9m 56s
CPU: 0% Memory : 133M Last used: 2m 10s ago
* PID: 29463 Sessions: 1 Processed: 19 Uptime: 9m 56s
CPU: 0% Memory : 111M Last used: 27s ago
* PID: 29477 Sessions: 1 Processed: 23 Uptime: 9m 56s
CPU: 0% Memory : 117M Last used: 14s ago
* PID: 30625 Sessions: 1 Processed: 7 Uptime: 6m 49s
CPU: 0% Memory : 106M Last used: 1m 21s ago
* PID: 30668 Sessions: 1 Processed: 2 Uptime: 6m 44s
CPU: 0% Memory : 105M Last used: 1m 13s ago
* PID: 30706 Sessions: 1 Processed: 16 Uptime: 6m 43s
CPU: 0% Memory : 148M Last used: 1m 11s ago
* PID: 30718 Sessions: 1 Processed: 12 Uptime: 6m 43s
CPU: 0% Memory : 112M Last used: 1m 16s ago
I have some questions:
It seems that someone with a slow internet connection is requesting our service, causing Passenger processes to become blocked. We have to restart Nginx to get the web service working again. Has anyone had experience with this?
We also use Sidekiq for worker queues. Most of our workers are implemented without hitting MongoDB, and they work fine.
But there are 2 workers we use to update users' data, which query, update and insert data into the database. We've tried to optimize all of these tasks using MongoDB bulk commands (update & insert).
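For reference, this is roughly what the MongoDB 2.6 bulk API looks like in the mongo shell; the collection, fields and filter below are made-up placeholders for illustration, not taken from our code:
// Hypothetical unordered bulk update + insert (MongoDB 2.6 shell)
var bulk = db.users.initializeUnorderedBulkOp();
bulk.find({ last_sync: { $lt: ISODate("2015-04-01") } }).update({ $set: { needs_refresh: true } });
bulk.insert({ name: "example user", created_at: new Date() });
printjson(bulk.execute());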
Normally, when a small number of users request the web service, the workers work fine and busy queues are processed in about 1 minute, but when more requests come in, the busy queues block the whole system. We have to restart Nginx, again, to get it working. Below is our Sidekiq config:
development:
  :concurrency: 5
  :logfile: ./log/sidekiq_development.log
  :pidfile: ./log/sidekiq.pid
staging:
  :concurrency: 5
  :logfile: ./log/sidekiq_staging.log
  :pidfile: ./log/sidekiq.pid
production:
  :concurrency: 15
  :logfile: ./log/sidekiq_production.log
  :pidfile: ./log/sidekiq.pid
  :queues:
    - ...
We don't have any experience with these problems. Does anyone have any ideas?
Update 1:
After some monitoring while the server was under heavy load, we got this result: the MongoDB processes have many faults and stacked read queues. Below is what mongostat logged during the downtime:
insert query update delete getmore command flushes mapped vsize res faults locked db idx miss % qr|qw ar|aw netIn netOut conn time
*0 2 *0 *0 0 4|0 0 79g 160g 3.36g 137 memo_v2:2.6% 0 17|0 8|0 36k 8k 61 15:05:22
*0 6 *0 *0 0 1|0 0 79g 160g 3.38g 144 memo_v2:2.1% 0 30|0 3|0 722b 11k 61 15:05:23
1595 15 1 *0 0 5|0 0 79g 160g 3.41g 139 memo_v2:19.7% 0 20|0 8|0 164k 179k 61 15:05:25
1 18 2 *0 1 6|0 0 79g 160g 3.38g 198 memo_v2:14.4% 0 31|0 1|0 3k 122k 61 15:05:26
2 20 4 *0 0 7|0 0 79g 160g 3.38g 169 memo_v2:8.6% 0 29|0 1|0 3k 157k 61 15:05:27
1 6 23 *0 0 4|0 0 79g 160g 3.39g 190 memo_v2:18.7% 0 32|0 1|0 1k 63k 61 15:05:28
1 4 42 *0 0 4|0 0 79g 160g 3.1g 115 memo_v2:35.9% 0 30|0 0|1 1k 20k 61 15:05:29
1 5 51 *0 0 4|0 0 79g 160g 3.11g 177 memo_v2:30.0% 0 28|0 1|0 1k 23k 61 15:05:30
*0 6 20 *0 0 2|0 0 79g 160g 3.12g 174 memo_v2:40.9% 0 28|0 1|0 15k 7k 61 15:05:31
2 9 *0 *0 1 7|0 0 79g 160g 3.1g 236 memo_v2:4.4% 0 26|0 2|0 2k 31k 61 15:05:32
Has anyone faced this before?

I don't have enough reputation to post a comment, so I'll have to add a very lacklustre answer.
I don't have any experience with the stack, but if you are correct that slow clients are the cause of the Passenger issues, then I'd suggest you ensure there is adequate buffering in front of the Passenger processes.
For Nginx, the important setting looks to be proxy_buffers. The section titled "Using Buffers to Free Up Backend Servers" in the following article talks about the Nginx module: https://www.digitalocean.com/community/tutorials/understanding-nginx-http-proxying-load-balancing-buffering-and-caching
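As a rough sketch, buffering could be tuned along these lines; the values are illustrative, and the proxy_* directives only apply if Nginx proxies to a separate upstream rather than running Passenger in-process (in the integrated setup, passenger_buffer_response covers response buffering):
location / {
    proxy_buffering on;           # buffer backend responses so slow clients don't hold app processes
    proxy_buffer_size 8k;         # buffer for response headers
    proxy_buffers 16 16k;         # buffers for the response body
    proxy_busy_buffers_size 32k;  # portion that may be busy sending to the client
    client_body_buffer_size 128k; # buffer uploads from slow clients before they reach the app
}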
For the MongoDB issue, it sounds like you just need to dig a little deeper. If you can find where the issue is happening in the code, the solution will probably present itself. The article linked by Hongli looks very good for that.
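One way to dig deeper, assuming you can run commands against the affected database, is MongoDB's built-in profiler; the database name below follows the placeholder used in the Mongoid config, and the thresholds are just examples:
// In the mongo shell, against the affected database
use production_database
db.setProfilingLevel(1, 100)                                  // record operations slower than 100 ms
db.system.profile.find().sort({ ts: -1 }).limit(5).pretty()   // inspect the slowest recent operations
db.currentOp({ "secs_running": { $gte: 3 } })                 // long-running operations during a spike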

Related

Passenger Full Queue

Whenever we get large traffic spikes on our Rails 4 app, it seems Passenger can't handle the load and starts showing the dreaded "This website is under heavy load (queue full)" error page.
I'm trying to optimize things as much as I can on the app side, but looking at the status of Passenger when this occurs, it seems to me a lot of the processes are just stuck:
sudo passenger-status
Version : 5.3.5
Date : 2021-03-18 21:59:52 +0000
Instance: xGyoOAEC (nginx/1.18.0 Phusion_Passenger/5.3.5)
----------- General information -----------
Max pool size : 160
App groups : 2
Processes : 160
Requests in top-level queue : 0
----------- Application groups -----------
/var/www/app1 (production):
App root: /var/www/app1
Requests in queue: 0
* PID: 18014 Sessions: 1 Processed: 113 Uptime: 9m 21s
CPU: 0% Memory : 170M Last used: 1s ago
* PID: 18076 Sessions: 0 Processed: 12 Uptime: 9m 19s
CPU: 0% Memory : 131M Last used: 1m 44s ago
/var/www/app2 (production):
App root: /var/www/app2
Requests in queue: 250
* PID: 17786 Sessions: 1 Processed: 13 Uptime: 9m 24s
CPU: 0% Memory : 85M Last used: 1m 24s ago
* PID: 17833 Sessions: 1 Processed: 23 Uptime: 9m 22s
CPU: 0% Memory : 119M Last used: 1m 22s ago
* PID: 17978 Sessions: 1 Processed: 16 Uptime: 9m 21s
CPU: 0% Memory : 118M Last used: 1m 20s ago
* PID: 18122 Sessions: 1 Processed: 16 Uptime: 9m 19s
CPU: 0% Memory : 102M Last used: 1m 18s ago
* PID: 18183 Sessions: 1 Processed: 11 Uptime: 9m 17s
CPU: 0% Memory : 95M Last used: 1m 16s ago
* PID: 18233 Sessions: 1 Processed: 5 Uptime: 9m 15s
CPU: 0% Memory : 117M Last used: 1m 14s ago
* PID: 18289 Sessions: 1 Processed: 14 Uptime: 9m 13s
CPU: 0% Memory : 124M Last used: 3m 13s ago
* PID: 18351 Sessions: 1 Processed: 18 Uptime: 9m 11s
CPU: 0% Memory : 96M Last used: 1m 11s ago
* PID: 18411 Sessions: 1 Processed: 24 Uptime: 9m 10s
CPU: 0% Memory : 97M Last used: 1m 9s ago
* PID: 18463 Sessions: 1 Processed: 14 Uptime: 9m 8s
CPU: 0% Memory : 98M Last used: 1m 7s ago
* PID: 18516 Sessions: 1 Processed: 29 Uptime: 9m 6s
CPU: 0% Memory : 98M Last used: 1m 5s ago
* PID: 18575 Sessions: 1 Processed: 15 Uptime: 9m 4s
CPU: 0% Memory : 95M Last used: 1m 4s ago
* PID: 18623 Sessions: 1 Processed: 22 Uptime: 9m 2s
CPU: 0% Memory : 100M Last used: 1m 2s ago
* PID: 18673 Sessions: 1 Processed: 32 Uptime: 9m 1s
CPU: 0% Memory : 99M Last used: 1m 0s ago
* PID: 18729 Sessions: 1 Processed: 15 Uptime: 8m 59s
CPU: 0% Memory : 86M Last used: 58s ago
* PID: 18788 Sessions: 1 Processed: 19 Uptime: 8m 57s
CPU: 0% Memory : 96M Last used: 56s ago
* PID: 18839 Sessions: 1 Processed: 29 Uptime: 8m 55s
CPU: 0% Memory : 98M Last used: 54s ago
* PID: 18895 Sessions: 1 Processed: 18 Uptime: 8m 53s
CPU: 0% Memory : 85M Last used: 53s ago
* PID: 18944 Sessions: 1 Processed: 19 Uptime: 8m 51s
CPU: 0% Memory : 103M Last used: 51s ago
* PID: 18998 Sessions: 1 Processed: 10 Uptime: 8m 50s
CPU: 0% Memory : 97M Last used: 2m 49s ago
* PID: 19061 Sessions: 1 Processed: 32 Uptime: 8m 48s
CPU: 0% Memory : 98M Last used: 47s ago
* PID: 19125 Sessions: 1 Processed: 17 Uptime: 8m 46s
CPU: 0% Memory : 99M Last used: 45s ago
* PID: 19183 Sessions: 1 Processed: 10 Uptime: 8m 44s
CPU: 0% Memory : 118M Last used: 43s ago
* PID: 19232 Sessions: 1 Processed: 8 Uptime: 8m 42s
CPU: 0% Memory : 98M Last used: 42s ago
* PID: 19286 Sessions: 1 Processed: 36 Uptime: 8m 41s
CPU: 0% Memory : 118M Last used: 40s ago
* PID: 19342 Sessions: 1 Processed: 16 Uptime: 8m 39s
CPU: 0% Memory : 118M Last used: 38s ago
* PID: 19395 Sessions: 1 Processed: 12 Uptime: 8m 37s
CPU: 0% Memory : 98M Last used: 36s ago
* PID: 19445 Sessions: 1 Processed: 11 Uptime: 8m 35s
CPU: 0% Memory : 95M Last used: 2m 35s ago
* PID: 19504 Sessions: 1 Processed: 12 Uptime: 8m 33s
CPU: 0% Memory : 103M Last used: 33s ago
* PID: 19557 Sessions: 1 Processed: 37 Uptime: 8m 31s
CPU: 0% Memory : 102M Last used: 31s ago
* PID: 19608 Sessions: 1 Processed: 11 Uptime: 8m 30s
CPU: 0% Memory : 116M Last used: 29s ago
* PID: 19664 Sessions: 1 Processed: 14 Uptime: 8m 28s
CPU: 0% Memory : 119M Last used: 27s ago
* PID: 19712 Sessions: 1 Processed: 19 Uptime: 8m 26s
CPU: 0% Memory : 97M Last used: 25s ago
* PID: 19774 Sessions: 1 Processed: 22 Uptime: 8m 24s
CPU: 0% Memory : 100M Last used: 23s ago
* PID: 19824 Sessions: 1 Processed: 19 Uptime: 8m 22s
CPU: 0% Memory : 115M Last used: 21s ago
* PID: 19954 Sessions: 1 Processed: 12 Uptime: 8m 20s
CPU: 0% Memory : 95M Last used: 20s ago
* PID: 20021 Sessions: 1 Processed: 4 Uptime: 8m 19s
CPU: 0% Memory : 117M Last used: 18s ago
* PID: 20073 Sessions: 1 Processed: 16 Uptime: 8m 17s
CPU: 0% Memory : 96M Last used: 16s ago
* PID: 20124 Sessions: 1 Processed: 11 Uptime: 8m 15s
CPU: 0% Memory : 95M Last used: 14s ago
* PID: 20182 Sessions: 1 Processed: 19 Uptime: 8m 13s
CPU: 0% Memory : 106M Last used: 13s ago
* PID: 20236 Sessions: 1 Processed: 8 Uptime: 8m 11s
CPU: 0% Memory : 100M Last used: 11s ago
* PID: 20287 Sessions: 1 Processed: 16 Uptime: 8m 10s
CPU: 0% Memory : 98M Last used: 9s ago
* PID: 20344 Sessions: 1 Processed: 7 Uptime: 8m 8s
CPU: 0% Memory : 100M Last used: 7s ago
* PID: 20397 Sessions: 1 Processed: 17 Uptime: 8m 6s
CPU: 0% Memory : 96M Last used: 5s ago
* PID: 20456 Sessions: 1 Processed: 11 Uptime: 8m 4s
CPU: 0% Memory : 118M Last used: 3s ago
* PID: 20510 Sessions: 1 Processed: 16 Uptime: 8m 2s
CPU: 0% Memory : 117M Last used: 2s ago
* PID: 20565 Sessions: 1 Processed: 6 Uptime: 8m 1s
CPU: 0% Memory : 97M Last used: 2m 0s ago
* PID: 20622 Sessions: 1 Processed: 8 Uptime: 7m 59s
CPU: 0% Memory : 94M Last used: 1m 58s ago
* PID: 20674 Sessions: 1 Processed: 9 Uptime: 7m 57s
CPU: 0% Memory : 120M Last used: 1m 56s ago
* PID: 20726 Sessions: 1 Processed: 12 Uptime: 7m 55s
CPU: 0% Memory : 118M Last used: 1m 54s ago
* PID: 20784 Sessions: 1 Processed: 25 Uptime: 7m 53s
CPU: 0% Memory : 97M Last used: 1m 53s ago
* PID: 20833 Sessions: 1 Processed: 6 Uptime: 7m 51s
CPU: 0% Memory : 117M Last used: 1m 51s ago
* PID: 20889 Sessions: 1 Processed: 3 Uptime: 7m 50s
CPU: 0% Memory : 84M Last used: 5m 49s ago
* PID: 20951 Sessions: 1 Processed: 19 Uptime: 7m 48s
CPU: 0% Memory : 99M Last used: 1m 47s ago
* PID: 21016 Sessions: 1 Processed: 25 Uptime: 7m 46s
CPU: 0% Memory : 100M Last used: 1m 46s ago
* PID: 21068 Sessions: 1 Processed: 5 Uptime: 7m 44s
CPU: 0% Memory : 94M Last used: 1m 44s ago
* PID: 21128 Sessions: 1 Processed: 6 Uptime: 7m 42s
CPU: 0% Memory : 85M Last used: 1m 42s ago
* PID: 21180 Sessions: 1 Processed: 23 Uptime: 7m 41s
CPU: 0% Memory : 99M Last used: 1m 40s ago
* PID: 21238 Sessions: 1 Processed: 10 Uptime: 7m 39s
CPU: 0% Memory : 97M Last used: 1m 38s ago
* PID: 21290 Sessions: 1 Processed: 14 Uptime: 7m 37s
CPU: 0% Memory : 85M Last used: 1m 37s ago
* PID: 21341 Sessions: 1 Processed: 9 Uptime: 7m 35s
CPU: 0% Memory : 97M Last used: 1m 35s ago
* PID: 21399 Sessions: 1 Processed: 13 Uptime: 7m 33s
CPU: 0% Memory : 98M Last used: 1m 33s ago
* PID: 21448 Sessions: 1 Processed: 11 Uptime: 7m 32s
CPU: 0% Memory : 91M Last used: 5m 31s ago
* PID: 21500 Sessions: 1 Processed: 12 Uptime: 7m 30s
CPU: 0% Memory : 95M Last used: 1m 29s ago
* PID: 21557 Sessions: 1 Processed: 17 Uptime: 7m 28s
CPU: 0% Memory : 98M Last used: 1m 27s ago
* PID: 21606 Sessions: 1 Processed: 8 Uptime: 7m 26s
CPU: 0% Memory : 85M Last used: 1m 26s ago
* PID: 21657 Sessions: 1 Processed: 22 Uptime: 7m 24s
CPU: 0% Memory : 86M Last used: 1m 24s ago
* PID: 21715 Sessions: 1 Processed: 13 Uptime: 7m 22s
CPU: 0% Memory : 97M Last used: 1m 22s ago
* PID: 21844 Sessions: 1 Processed: 8 Uptime: 7m 21s
CPU: 0% Memory : 90M Last used: 5m 20s ago
* PID: 21912 Sessions: 1 Processed: 21 Uptime: 7m 19s
CPU: 0% Memory : 96M Last used: 1m 18s ago
* PID: 21969 Sessions: 1 Processed: 7 Uptime: 7m 17s
CPU: 0% Memory : 94M Last used: 1m 16s ago
* PID: 22052 Sessions: 1 Processed: 28 Uptime: 7m 15s
CPU: 0% Memory : 121M Last used: 1m 14s ago
* PID: 22119 Sessions: 1 Processed: 16 Uptime: 7m 13s
CPU: 0% Memory : 95M Last used: 1m 13s ago
* PID: 22190 Sessions: 1 Processed: 21 Uptime: 7m 11s
CPU: 0% Memory : 99M Last used: 1m 11s ago
* PID: 22262 Sessions: 1 Processed: 8 Uptime: 7m 10s
CPU: 0% Memory : 85M Last used: 1m 9s ago
* PID: 22321 Sessions: 1 Processed: 14 Uptime: 7m 8s
CPU: 0% Memory : 85M Last used: 1m 8s ago
* PID: 22373 Sessions: 1 Processed: 8 Uptime: 7m 6s
CPU: 0% Memory : 117M Last used: 1m 5s ago
* PID: 22429 Sessions: 1 Processed: 14 Uptime: 7m 4s
CPU: 0% Memory : 95M Last used: 1m 4s ago
* PID: 22488 Sessions: 1 Processed: 19 Uptime: 7m 2s
CPU: 0% Memory : 85M Last used: 1m 2s ago
* PID: 22542 Sessions: 1 Processed: 8 Uptime: 7m 0s
CPU: 0% Memory : 94M Last used: 1m 0s ago
* PID: 22601 Sessions: 1 Processed: 16 Uptime: 6m 59s
CPU: 0% Memory : 96M Last used: 58s ago
* PID: 22655 Sessions: 1 Processed: 6 Uptime: 6m 57s
CPU: 0% Memory : 96M Last used: 56s ago
* PID: 22714 Sessions: 1 Processed: 24 Uptime: 6m 55s
CPU: 0% Memory : 96M Last used: 55s ago
* PID: 22772 Sessions: 1 Processed: 8 Uptime: 6m 53s
CPU: 0% Memory : 97M Last used: 53s ago
* PID: 22822 Sessions: 1 Processed: 21 Uptime: 6m 51s
CPU: 0% Memory : 96M Last used: 51s ago
* PID: 22878 Sessions: 1 Processed: 13 Uptime: 6m 50s
CPU: 0% Memory : 115M Last used: 49s ago
* PID: 22938 Sessions: 1 Processed: 14 Uptime: 6m 48s
CPU: 0% Memory : 86M Last used: 47s ago
* PID: 23008 Sessions: 1 Processed: 22 Uptime: 6m 46s
CPU: 0% Memory : 96M Last used: 45s ago
* PID: 23060 Sessions: 1 Processed: 16 Uptime: 6m 44s
CPU: 0% Memory : 117M Last used: 43s ago
* PID: 23118 Sessions: 1 Processed: 6 Uptime: 6m 42s
CPU: 0% Memory : 94M Last used: 42s ago
* PID: 23171 Sessions: 1 Processed: 9 Uptime: 6m 40s
CPU: 0% Memory : 117M Last used: 40s ago
* PID: 23230 Sessions: 1 Processed: 11 Uptime: 6m 39s
CPU: 0% Memory : 118M Last used: 38s ago
* PID: 23284 Sessions: 1 Processed: 3 Uptime: 6m 37s
CPU: 0% Memory : 84M Last used: 36s ago
* PID: 23341 Sessions: 1 Processed: 8 Uptime: 6m 35s
CPU: 0% Memory : 94M Last used: 34s ago
* PID: 23399 Sessions: 1 Processed: 6 Uptime: 6m 33s
CPU: 0% Memory : 94M Last used: 33s ago
* PID: 23452 Sessions: 1 Processed: 4 Uptime: 6m 31s
CPU: 0% Memory : 94M Last used: 2m 31s ago
* PID: 23503 Sessions: 1 Processed: 12 Uptime: 6m 29s
CPU: 0% Memory : 95M Last used: 29s ago
* PID: 23562 Sessions: 1 Processed: 23 Uptime: 6m 28s
CPU: 0% Memory : 96M Last used: 27s ago
* PID: 23613 Sessions: 1 Processed: 14 Uptime: 6m 26s
CPU: 0% Memory : 85M Last used: 25s ago
* PID: 23667 Sessions: 1 Processed: 17 Uptime: 6m 24s
CPU: 0% Memory : 117M Last used: 23s ago
* PID: 23727 Sessions: 1 Processed: 20 Uptime: 6m 22s
CPU: 0% Memory : 96M Last used: 22s ago
* PID: 23857 Sessions: 1 Processed: 20 Uptime: 6m 20s
CPU: 0% Memory : 96M Last used: 20s ago
* PID: 23914 Sessions: 1 Processed: 14 Uptime: 6m 19s
CPU: 0% Memory : 85M Last used: 18s ago
* PID: 23977 Sessions: 1 Processed: 20 Uptime: 6m 17s
CPU: 0% Memory : 86M Last used: 16s ago
* PID: 24031 Sessions: 1 Processed: 8 Uptime: 6m 15s
CPU: 0% Memory : 117M Last used: 14s ago
* PID: 24091 Sessions: 1 Processed: 13 Uptime: 6m 13s
CPU: 0% Memory : 117M Last used: 13s ago
* PID: 24143 Sessions: 1 Processed: 12 Uptime: 6m 11s
CPU: 0% Memory : 95M Last used: 11s ago
* PID: 24212 Sessions: 1 Processed: 14 Uptime: 6m 10s
CPU: 0% Memory : 98M Last used: 9s ago
* PID: 24271 Sessions: 1 Processed: 25 Uptime: 6m 8s
CPU: 0% Memory : 96M Last used: 7s ago
* PID: 24328 Sessions: 1 Processed: 19 Uptime: 6m 6s
CPU: 0% Memory : 95M Last used: 5s ago
* PID: 24382 Sessions: 1 Processed: 9 Uptime: 6m 4s
CPU: 0% Memory : 95M Last used: 3s ago
* PID: 24440 Sessions: 1 Processed: 10 Uptime: 6m 2s
CPU: 0% Memory : 94M Last used: 2s ago
* PID: 24495 Sessions: 1 Processed: 7 Uptime: 6m 0s
CPU: 0% Memory : 94M Last used: 2m 0s ago
* PID: 24547 Sessions: 1 Processed: 8 Uptime: 5m 59s
CPU: 0% Memory : 85M Last used: 5m 58s ago
* PID: 24611 Sessions: 1 Processed: 13 Uptime: 5m 57s
CPU: 0% Memory : 95M Last used: 1m 56s ago
* PID: 24665 Sessions: 1 Processed: 4 Uptime: 5m 55s
CPU: 0% Memory : 117M Last used: 1m 54s ago
* PID: 24729 Sessions: 1 Processed: 6 Uptime: 5m 53s
CPU: 0% Memory : 93M Last used: 1m 53s ago
* PID: 24781 Sessions: 1 Processed: 3 Uptime: 5m 51s
CPU: 0% Memory : 84M Last used: 1m 51s ago
* PID: 24839 Sessions: 1 Processed: 25 Uptime: 5m 49s
CPU: 0% Memory : 86M Last used: 1m 49s ago
* PID: 24906 Sessions: 1 Processed: 1 Uptime: 5m 48s
CPU: 0% Memory : 84M Last used: 5m 47s ago
* PID: 24973 Sessions: 1 Processed: 9 Uptime: 5m 46s
CPU: 0% Memory : 94M Last used: 1m 45s ago
* PID: 25031 Sessions: 1 Processed: 8 Uptime: 5m 44s
CPU: 0% Memory : 85M Last used: 1m 44s ago
* PID: 25090 Sessions: 1 Processed: 3 Uptime: 5m 42s
CPU: 0% Memory : 95M Last used: 1m 42s ago
* PID: 25144 Sessions: 1 Processed: 8 Uptime: 5m 40s
CPU: 0% Memory : 117M Last used: 1m 40s ago
* PID: 25196 Sessions: 1 Processed: 22 Uptime: 5m 38s
CPU: 0% Memory : 96M Last used: 1m 38s ago
* PID: 25258 Sessions: 1 Processed: 6 Uptime: 5m 37s
CPU: 0% Memory : 94M Last used: 1m 36s ago
* PID: 25313 Sessions: 1 Processed: 11 Uptime: 5m 35s
CPU: 0% Memory : 94M Last used: 1m 34s ago
* PID: 25372 Sessions: 1 Processed: 20 Uptime: 5m 33s
CPU: 0% Memory : 96M Last used: 1m 33s ago
* PID: 25426 Sessions: 1 Processed: 13 Uptime: 5m 31s
CPU: 0% Memory : 95M Last used: 1m 31s ago
* PID: 25478 Sessions: 1 Processed: 3 Uptime: 5m 29s
CPU: 0% Memory : 93M Last used: 1m 29s ago
* PID: 25545 Sessions: 1 Processed: 11 Uptime: 5m 28s
CPU: 0% Memory : 94M Last used: 1m 27s ago
* PID: 25607 Sessions: 1 Processed: 3 Uptime: 5m 26s
CPU: 0% Memory : 93M Last used: 1m 25s ago
* PID: 25658 Sessions: 1 Processed: 10 Uptime: 5m 24s
CPU: 0% Memory : 97M Last used: 1m 23s ago
* PID: 25720 Sessions: 1 Processed: 14 Uptime: 5m 22s
CPU: 0% Memory : 118M Last used: 1m 22s ago
* PID: 25850 Sessions: 1 Processed: 7 Uptime: 5m 20s
CPU: 0% Memory : 123M Last used: 1m 20s ago
* PID: 25909 Sessions: 1 Processed: 5 Uptime: 5m 18s
CPU: 0% Memory : 84M Last used: 1m 18s ago
* PID: 25973 Sessions: 1 Processed: 17 Uptime: 5m 17s
CPU: 0% Memory : 98M Last used: 1m 16s ago
* PID: 26028 Sessions: 1 Processed: 17 Uptime: 5m 15s
CPU: 0% Memory : 95M Last used: 1m 14s ago
* PID: 26095 Sessions: 1 Processed: 2 Uptime: 5m 13s
CPU: 0% Memory : 95M Last used: 1m 13s ago
* PID: 26146 Sessions: 1 Processed: 9 Uptime: 5m 11s
CPU: 0% Memory : 117M Last used: 1m 11s ago
* PID: 26201 Sessions: 1 Processed: 6 Uptime: 5m 9s
CPU: 0% Memory : 93M Last used: 1m 9s ago
* PID: 26264 Sessions: 1 Processed: 18 Uptime: 5m 7s
CPU: 0% Memory : 95M Last used: 1m 7s ago
* PID: 26318 Sessions: 1 Processed: 8 Uptime: 5m 6s
CPU: 0% Memory : 117M Last used: 1m 5s ago
* PID: 26373 Sessions: 1 Processed: 14 Uptime: 5m 4s
CPU: 0% Memory : 85M Last used: 1m 3s ago
* PID: 26439 Sessions: 1 Processed: 13 Uptime: 5m 2s
CPU: 0% Memory : 97M Last used: 1m 2s ago
* PID: 26494 Sessions: 1 Processed: 16 Uptime: 5m 0s
CPU: 0% Memory : 97M Last used: 1m 0s ago
* PID: 26547 Sessions: 1 Processed: 9 Uptime: 4m 58s
CPU: 0% Memory : 117M Last used: 58s ago
* PID: 26610 Sessions: 1 Processed: 3 Uptime: 4m 56s
CPU: 0% Memory : 93M Last used: 56s ago
* PID: 26665 Sessions: 1 Processed: 4 Uptime: 4m 55s
CPU: 0% Memory : 84M Last used: 54s ago
* PID: 26721 Sessions: 1 Processed: 3 Uptime: 4m 53s
CPU: 0% Memory : 84M Last used: 52s ago
* PID: 26777 Sessions: 1 Processed: 8 Uptime: 4m 51s
CPU: 0% Memory : 94M Last used: 51s ago
* PID: 26834 Sessions: 1 Processed: 15 Uptime: 4m 49s
CPU: 0% Memory : 117M Last used: 49s ago
* PID: 26895 Sessions: 1 Processed: 5 Uptime: 4m 47s
CPU: 0% Memory : 84M Last used: 47s ago
* PID: 26963 Sessions: 1 Processed: 8 Uptime: 4m 45s
CPU: 0% Memory : 97M Last used: 45s ago
* PID: 27020 Sessions: 1 Processed: 12 Uptime: 4m 44s
CPU: 0% Memory : 94M Last used: 43s ago
* PID: 27081 Sessions: 1 Processed: 6 Uptime: 4m 42s
CPU: 0% Memory : 94M Last used: 41s ago
* PID: 27136 Sessions: 1 Processed: 6 Uptime: 4m 40s
CPU: 0% Memory : 84M Last used: 40s ago
* PID: 27189 Sessions: 1 Processed: 5 Uptime: 4m 38s
CPU: 0% Memory : 96M Last used: 38s ago
Any idea what could be causing this and what I could tweak to prevent it?
EDIT
passenger.conf:
passenger_root /usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini;
passenger_ruby /usr/bin/passenger_free_ruby;
passenger_max_pool_size 160;
passenger_max_request_queue_size 250;
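For reference, I'm wondering whether a smaller, bounded configuration along these lines would behave better (the numbers are just a sketch for a box with ~8 cores and a few GB of RAM, not what we currently run):
# Hypothetical sizing sketch, not the current production values
passenger_max_pool_size 32;            # roughly available RAM divided by per-process RSS, capped near CPU capacity
passenger_max_request_queue_size 100;  # shed load early instead of queueing requests for minutes
passenger_pool_idle_time 300;          # let unused processes exit after the spike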

Kubernetes cluster "cni config uninitialized"

The problem I'm running into is very similar to other existing posts, except they all have the same solution, so I'm creating a new thread.
The Problem:
The Master node is still in "NotReady" status after installing Flannel.
Expected result:
Master Node becomes "Ready" after installing Flannel.
Background:
I am following this guide when installing Flannel
My concern is that I am using Kubelet v1.17.2 by default, which just came out last month. (Can anyone confirm that v1.17.2 works with Flannel?)
Here is the output after running the command on master node: kubectl describe node machias
Name: machias
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=machias
kubernetes.io/os=linux
node-role.kubernetes.io/master=
Annotations: flannel.alpha.coreos.com/backend-data: {"VtepMAC":"be:78:65:7f:ae:6d"}
flannel.alpha.coreos.com/backend-type: vxlan
flannel.alpha.coreos.com/kube-subnet-manager: true
flannel.alpha.coreos.com/public-ip: 192.168.122.172
kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 15 Feb 2020 01:00:01 -0500
Taints: node.kubernetes.io/not-ready:NoExecute
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: machias
AcquireTime: <unset>
RenewTime: Sat, 15 Feb 2020 13:54:56 -0500
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sat, 15 Feb 2020 13:54:52 -0500 Sat, 15 Feb 2020 00:59:54 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 15 Feb 2020 13:54:52 -0500 Sat, 15 Feb 2020 00:59:54 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 15 Feb 2020 13:54:52 -0500 Sat, 15 Feb 2020 00:59:54 -0500 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Sat, 15 Feb 2020 13:54:52 -0500 Sat, 15 Feb 2020 00:59:54 -0500 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
InternalIP: 192.168.122.172
Hostname: machias
Capacity:
cpu: 2
ephemeral-storage: 38583284Ki
hugepages-2Mi: 0
memory: 4030364Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 35558354476
hugepages-2Mi: 0
memory: 3927964Ki
pods: 110
System Info:
Machine ID: 20cbe0d737dd43588f4a2bccd70681a2
System UUID: ee9bc138-edee-471a-8ecc-f1c567c5f796
Boot ID: 0ba49907-ec32-4e80-bc4c-182fccb0b025
Kernel Version: 5.3.5-200.fc30.x86_64
OS Image: Fedora 30 (Workstation Edition)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.5
Kubelet Version: v1.17.2
Kube-Proxy Version: v1.17.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system etcd-machias 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12h
kube-system kube-apiserver-machias 250m (12%) 0 (0%) 0 (0%) 0 (0%) 12h
kube-system kube-controller-manager-machias 200m (10%) 0 (0%) 0 (0%) 0 (0%) 12h
kube-system kube-flannel-ds-amd64-rrfht 100m (5%) 100m (5%) 50Mi (1%) 50Mi (1%) 12h
kube-system kube-proxy-z2q7d 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12h
kube-system kube-scheduler-machias 100m (5%) 0 (0%) 0 (0%) 0 (0%) 12h
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 650m (32%) 100m (5%)
memory 50Mi (1%) 50Mi (1%)
ephemeral-storage 0 (0%) 0 (0%)
Events: <none>
And the following command: kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6955765f44-7nz46 0/1 Pending 0 12h
kube-system coredns-6955765f44-xk5r2 0/1 Pending 0 13h
kube-system etcd-machias.cs.unh.edu 1/1 Running 0 13h
kube-system kube-apiserver-machias.cs.unh.edu 1/1 Running 0 13h
kube-system kube-controller-manager-machias.cs.unh.edu 1/1 Running 0 13h
kube-system kube-flannel-ds-amd64-rrfht 1/1 Running 0 12h
kube-system kube-flannel-ds-amd64-t7p2p 1/1 Running 0 12h
kube-system kube-proxy-fnn78 1/1 Running 0 12h
kube-system kube-proxy-z2q7d 1/1 Running 0 13h
kube-system kube-scheduler-machias.cs.unh.edu 1/1 Running 0 13h
Thank you for your help!
I've reproduced your scenario using the same versions you are using to make sure these versions work with Flannel.
After testing it I can affirm that there is no problem with the version you are using.
I created it following these steps:
Ensure iptables tooling does not use the nftables backend Source
update-alternatives --set iptables /usr/sbin/iptables-legacy
Installing runtime
sudo yum remove docker docker-common docker-selinux docker-engine
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install docker-ce-19.03.5-3.el7
sudo systemctl start docker
Installing kubeadm, kubelet and kubectl
sudo su -c "cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF"
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
sudo yum install -y kubelet-1.17.2-0 kubeadm-1.17.2-0 kubectl-1.17.2-0 --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
Note:
Setting SELinux in permissive mode by running setenforce 0 and sed ... effectively disables it. This is required to allow containers to access the host filesystem, which is needed by pod networks for example. You have to do this until SELinux support is improved in the kubelet.
Some users on RHEL/CentOS 7 have reported issues with traffic being routed incorrectly due to iptables being bypassed. You should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config, e.g.
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
Make sure that the br_netfilter module is loaded before this step. This can be done by running lsmod | grep br_netfilter. To load it explicitly call modprobe br_netfilter.
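For example, the check and the explicit load can be done like this (the last line, which makes the module load persist across reboots, is an extra step not mentioned above):
lsmod | grep br_netfilter                                    # prints a line if the module is already loaded
sudo modprobe br_netfilter                                   # load it for the current boot
echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf    # optional: load it automatically at boot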
Initialize cluster with Flannel CIDR
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Add Flannel CNI
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
By default, your cluster will not schedule Pods on the control-plane node for security reasons. If you want to be able to schedule Pods on the control-plane node, e.g. for a single-machine Kubernetes cluster for development, run:
kubectl taint nodes --all node-role.kubernetes.io/master-
As can be seen below, my master node is Ready. Please follow this how-to and let me know if you can achieve your desired state.
$ kubectl describe nodes
Name: kubeadm-fedora
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=kubeadm-fedora
kubernetes.io/os=linux
node-role.kubernetes.io/master=
Annotations: flannel.alpha.coreos.com/backend-data: {"VtepMAC":"8e:7e:bf:d9:21:1e"}
flannel.alpha.coreos.com/backend-type: vxlan
flannel.alpha.coreos.com/kube-subnet-manager: true
flannel.alpha.coreos.com/public-ip: 10.128.15.200
kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 17 Feb 2020 11:31:59 +0000
Taints: node-role.kubernetes.io/master:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: kubeadm-fedora
AcquireTime: <unset>
RenewTime: Mon, 17 Feb 2020 11:47:52 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 17 Feb 2020 11:47:37 +0000 Mon, 17 Feb 2020 11:31:51 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 17 Feb 2020 11:47:37 +0000 Mon, 17 Feb 2020 11:31:51 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 17 Feb 2020 11:47:37 +0000 Mon, 17 Feb 2020 11:31:51 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 17 Feb 2020 11:47:37 +0000 Mon, 17 Feb 2020 11:32:32 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 10.128.15.200
Hostname: kubeadm-fedora
Capacity:
cpu: 2
ephemeral-storage: 104844988Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 7493036Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 96625140781
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 7390636Ki
pods: 110
System Info:
Machine ID: 41689852cca44b659f007bb418a6fa9f
System UUID: 390D88CD-3D28-5657-8D0C-83AB1974C88A
Boot ID: bff1c808-788e-48b8-a789-4fee4e800554
Kernel Version: 3.10.0-1062.9.1.el7.x86_64
OS Image: CentOS Linux 7 (Core)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.5
Kubelet Version: v1.17.2
Kube-Proxy Version: v1.17.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (8 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-6955765f44-d9fb4 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 15m
kube-system coredns-6955765f44-l7xrk 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 15m
kube-system etcd-kubeadm-fedora 0 (0%) 0 (0%) 0 (0%) 0 (0%) 15m
kube-system kube-apiserver-kubeadm-fedora 250m (12%) 0 (0%) 0 (0%) 0 (0%) 15m
kube-system kube-controller-manager-kubeadm-fedora 200m (10%) 0 (0%) 0 (0%) 0 (0%) 15m
kube-system kube-flannel-ds-amd64-v6m2w 100m (5%) 100m (5%) 50Mi (0%) 50Mi (0%) 15m
kube-system kube-proxy-d65kl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 15m
kube-system kube-scheduler-kubeadm-fedora 100m (5%) 0 (0%) 0 (0%) 0 (0%) 15m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 100m (5%)
memory 190Mi (2%) 390Mi (5%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 16m (x6 over 16m) kubelet, kubeadm-fedora Node kubeadm-fedora status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 16m (x5 over 16m) kubelet, kubeadm-fedora Node kubeadm-fedora status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 16m (x5 over 16m) kubelet, kubeadm-fedora Node kubeadm-fedora status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 16m kubelet, kubeadm-fedora Updated Node Allocatable limit across pods
Normal Starting 15m kubelet, kubeadm-fedora Starting kubelet.
Normal NodeHasSufficientMemory 15m kubelet, kubeadm-fedora Node kubeadm-fedora status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 15m kubelet, kubeadm-fedora Node kubeadm-fedora status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 15m kubelet, kubeadm-fedora Node kubeadm-fedora status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 15m kubelet, kubeadm-fedora Updated Node Allocatable limit across pods
Normal Starting 15m kube-proxy, kubeadm-fedora Starting kube-proxy.
Normal NodeReady 15m kubelet, kubeadm-fedora Node kubeadm-fedora status is now: NodeReady
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubeadm-fedora Ready master 17m v1.17.2
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6955765f44-d9fb4 1/1 Running 0 17m
kube-system coredns-6955765f44-l7xrk 1/1 Running 0 17m
kube-system etcd-kubeadm-fedora 1/1 Running 0 17m
kube-system kube-apiserver-kubeadm-fedora 1/1 Running 0 17m
kube-system kube-controller-manager-kubeadm-fedora 1/1 Running 0 17m
kube-system kube-flannel-ds-amd64-v6m2w 1/1 Running 0 17m
kube-system kube-proxy-d65kl 1/1 Running 0 17m
kube-system kube-scheduler-kubeadm-fedora 1/1 Running 0 17m
The PodCIDR value is showing as 10.244.0.0/24. For Flannel to work correctly, you must pass --pod-network-cidr=10.244.0.0/16 to kubeadm init.
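To verify this on your own cluster, something along these lines shows the assigned CIDRs and how to re-initialize with the CIDR Flannel expects (kubeadm reset wipes the control plane, so treat this as a sketch rather than a prescription):
# Show the pod CIDR assigned to each node
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
# Re-create the control plane with the CIDR the Flannel manifest expects
sudo kubeadm reset
sudo kubeadm init --pod-network-cidr=10.244.0.0/16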

How to fix 'container runtime is down,PLEG is not healthy'

I have AKS with one Kubernetes cluster that has 2 nodes. Each node runs about 6-7 pods, with 2 containers per pod. One container is my Docker image and the other is created by Istio for its service mesh. But after about 10 hours the nodes become 'NotReady' and the node describe output shows me 2 errors:
1.container runtime is down,PLEG is not healthy: pleg was lastseen active 1h32m35.942907195s ago; threshold is 3m0s.
2.rpc error: code = DeadlineExceeded desc = context deadline exceeded,
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
When I restart the node it works fine, but it goes back to 'NotReady' after a while. I started facing this issue after adding Istio, but could not find any documentation relating the two. The next step is to try upgrading Kubernetes.
The node describe log:
Name: aks-agentpool-22124581-0
Roles: agent
Labels: agentpool=agentpool
beta.kubernetes.io/arch=amd64
beta.kubernetes.io/instance-type=Standard_B2s
beta.kubernetes.io/os=linux
failure-domain.beta.kubernetes.io/region=eastus
failure-domain.beta.kubernetes.io/zone=1
kubernetes.azure.com/cluster=MC_XXXXXXXXX
kubernetes.io/hostname=aks-XXXXXXXXX
kubernetes.io/role=agent
node-role.kubernetes.io/agent=
storageprofile=managed
storagetier=Premium_LRS
Annotations: aks.microsoft.com/remediated=3
node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Thu, 25 Oct 2018 14:46:53 +0000
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Thu, 25 Oct 2018 14:49:06 +0000 Thu, 25 Oct 2018 14:49:06 +0000 RouteCreated RouteController created a route
OutOfDisk False Wed, 19 Dec 2018 19:28:55 +0000 Wed, 19 Dec 2018 19:27:24 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Wed, 19 Dec 2018 19:28:55 +0000 Wed, 19 Dec 2018 19:27:24 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 19 Dec 2018 19:28:55 +0000 Wed, 19 Dec 2018 19:27:24 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 19 Dec 2018 19:28:55 +0000 Thu, 25 Oct 2018 14:46:53 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Wed, 19 Dec 2018 19:28:55 +0000 Wed, 19 Dec 2018 19:27:24 +0000 KubeletNotReady container runtime is down,PLEG is not healthy: pleg was lastseen active 1h32m35.942907195s ago; threshold is 3m0s
Addresses:
Hostname: aks-XXXXXXXXX
Capacity:
cpu: 2
ephemeral-storage: 30428648Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 4040536Ki
pods: 110
Allocatable:
cpu: 1940m
ephemeral-storage: 28043041951
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 3099480Ki
pods: 110
System Info:
Machine ID: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
System UUID: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Boot ID: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Kernel Version: 4.15.0-1035-azure
OS Image: Ubuntu 16.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://Unknown
Kubelet Version: v1.11.3
Kube-Proxy Version: v1.11.3
PodCIDR: 10.244.0.0/24
ProviderID: azure:///subscriptions/9XXXXXXXXXXX/resourceGroups/MC_XXXXXXXXXXXXXXXXXXXXXXXXXXXX/providers/Microsoft.Compute/virtualMachines/aks-XXXXXXXXXXXX
Non-terminated Pods: (42 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
default emailgistics-graph-monitor-6477568564-q98p2 10m (0%) 0 (0%) 0 (0%) 0 (0%)
default emailgistics-message-handler-7df4566b6f-mh255 10m (0%) 0 (0%) 0 (0%) 0 (0%)
default emailgistics-reports-aggregator-5fd96b94cb-b5vbn 10m (0%) 0 (0%) 0 (0%) 0 (0%)
default emailgistics-rules-844b77f46-5lrkw 10m (0%) 0 (0%) 0 (0%) 0 (0%)
default emailgistics-scheduler-754884b566-mwgvp 10m (0%) 0 (0%) 0 (0%) 0 (0%)
default emailgistics-subscription-token-manager-7974558985-f2t49 10m (0%) 0 (0%) 0 (0%) 0 (0%)
default mollified-kiwi-cert-manager-665c5d9c8c-2ld59 0 (0%) 0 (0%) 0 (0%) 0 (0%)
istio-system grafana-59b787b9b-dzdtc 10m (0%) 0 (0%) 0 (0%) 0 (0%)
istio-system istio-citadel-5d8956cc6-x55vk 10m (0%) 0 (0%) 0 (0%) 0 (0%)
istio-system istio-egressgateway-f48fc7fbb-szpwp 10m (0%) 0 (0%) 0 (0%) 0 (0%)
istio-system istio-galley-6975b6bd45-g7lsc 10m (0%) 0 (0%) 0 (0%) 0 (0%)
istio-system istio-ingressgateway-c6c4bcdbf-bbgcw 10m (0%) 0 (0%) 0 (0%) 0 (0%)
istio-system istio-pilot-d9b5b9b7c-ln75n 510m (26%) 0 (0%) 2Gi (67%) 0 (0%)
istio-system istio-policy-6b465cd4bf-92l57 20m (1%) 0 (0%) 0 (0%) 0 (0%)
istio-system istio-policy-6b465cd4bf-b2z85 20m (1%) 0 (0%) 0 (0%) 0 (0%)
istio-system istio-policy-6b465cd4bf-j59r4 20m (1%) 0 (0%) 0 (0%) 0 (0%)
istio-system istio-policy-6b465cd4bf-s9pdm 20m (1%) 0 (0%) 0 (0%) 0 (0%)
istio-system istio-sidecar-injector-575597f5cf-npkcz 10m (0%) 0 (0%) 0 (0%) 0 (0%)
istio-system istio-telemetry-6944cd768-9794j 20m (1%) 0 (0%) 0 (0%) 0 (0%)
istio-system istio-telemetry-6944cd768-g7gh5 20m (1%) 0 (0%) 0 (0%) 0 (0%)
istio-system istio-telemetry-6944cd768-gd88n 20m (1%) 0 (0%) 0 (0%) 0 (0%)
istio-system istio-telemetry-6944cd768-px8qb 20m (1%) 0 (0%) 0 (0%) 0 (0%)
istio-system istio-telemetry-6944cd768-xzslh 20m (1%) 0 (0%) 0 (0%) 0 (0%)
istio-system istio-tracing-7596597bd7-hjtq2 10m (0%) 0 (0%) 0 (0%) 0 (0%)
istio-system prometheus-76db5fddd5-d6dxs 10m (0%) 0 (0%) 0 (0%) 0 (0%)
istio-system servicegraph-758f96bf5b-c9sqk 10m (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system addon-http-application-routing-default-http-backend-5ccb95zgfm8 10m (0%) 10m (0%) 20Mi (0%) 20Mi (0%)
kube-system addon-http-application-routing-external-dns-59d8698886-h8xds 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system addon-http-application-routing-nginx-ingress-controller-ff49qc7 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system heapster-5d6f9b846c-m4kfp 130m (6%) 130m (6%) 230Mi (7%) 230Mi (7%)
kube-system kube-dns-v20-7c7d7d4c66-qqkfm 120m (6%) 0 (0%) 140Mi (4%) 220Mi (7%)
kube-system kube-dns-v20-7c7d7d4c66-wrxjm 120m (6%) 0 (0%) 140Mi (4%) 220Mi (7%)
kube-system kube-proxy-2tb68 100m (5%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-svc-redirect-d6gqm 10m (0%) 0 (0%) 34Mi (1%) 0 (0%)
kube-system kubernetes-dashboard-68f468887f-l9x46 100m (5%) 100m (5%) 50Mi (1%) 300Mi (9%)
kube-system metrics-server-5cbc77f79f-x55cs 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system omsagent-mhrqm 50m (2%) 150m (7%) 150Mi (4%) 300Mi (9%)
kube-system omsagent-rs-d688cdf68-pjpmj 50m (2%) 150m (7%) 100Mi (3%) 500Mi (16%)
kube-system tiller-deploy-7f4974b9c8-flkjm 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system tunnelfront-7f766dd857-kgqps 10m (0%) 0 (0%) 64Mi (2%) 0 (0%)
kube-systems-dev nginx-ingress-dev-controller-7f78f6c8f9-csct4 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-systems-dev nginx-ingress-dev-default-backend-95fbc75b7-lq9tw 0 (0%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1540m (79%) 540m (27%)
memory 2976Mi (98%) 1790Mi (59%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ContainerGCFailed 48m (x43 over 19h) kubelet, aks-agentpool-22124581-0 rpc error: code = DeadlineExceeded desc = context deadline exceeded
Warning ImageGCFailed 29m (x57 over 18h) kubelet, aks-agentpool-22124581-0 failed to get image stats: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Warning ContainerGCFailed 2m (x237 over 18h) kubelet, aks-agentpool-22124581-0 rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
General deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  name: emailgistics-pod
spec:
  minReadySeconds: 10
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      annotations:
        sidecar.istio.io/status: '{"version":"ebf16d3ea0236e4b5cb4d3fc0f01da62e2e6265d005e58f8f6bd43a4fb672fdd","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}'
      creationTimestamp: null
      labels:
        app: emailgistics-pod
    spec:
      containers:
      - image: xxxxxxxxxxxxxxxxxxxxx/emailgistics_pod:xxxxxx
        imagePullPolicy: Always
        name: emailgistics-pod
        ports:
        - containerPort: 80
        resources: {}
      - args:
        - proxy
        - sidecar
        - --configPath
        - /etc/istio/proxy
        - --binaryPath
        - /usr/local/bin/envoy
        - --serviceCluster
        - emailgistics-pod
        - --drainDuration
        - 45s
        - --parentShutdownDuration
        - 1m0s
        - --discoveryAddress
        - istio-pilot.istio-system:15005
        - --discoveryRefreshDelay
        - 1s
        - --zipkinAddress
        - zipkin.istio-system:9411
        - --connectTimeout
        - 10s
        - --proxyAdminPort
        - "15000"
        - --controlPlaneAuthPolicy
        - MUTUAL_TLS
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: INSTANCE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: ISTIO_META_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: ISTIO_META_INTERCEPTION_MODE
          value: REDIRECT
        - name: ISTIO_METAJSON_LABELS
          value: |
            {"app":"emailgistics-pod"}
        image: docker.io/istio/proxyv2:1.0.4
        imagePullPolicy: IfNotPresent
        name: istio-proxy
        ports:
        - containerPort: 15090
          name: http-envoy-prom
          protocol: TCP
        resources:
          requests:
            cpu: 10m
        securityContext:
          readOnlyRootFilesystem: true
          runAsUser: 1337
        volumeMounts:
        - mountPath: /etc/istio/proxy
          name: istio-envoy
        - mountPath: /etc/certs/
          name: istio-certs
          readOnly: true
      imagePullSecrets:
      - name: ga.secretname
      initContainers:
      - args:
        - -p
        - "15001"
        - -u
        - "1337"
        - -m
        - REDIRECT
        - -i
        - '*'
        - -x
        - ""
        - -b
        - "80"
        - -d
        - ""
        image: docker.io/istio/proxy_init:1.0.4
        imagePullPolicy: IfNotPresent
        name: istio-init
        resources: {}
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
          privileged: true
      volumes:
      - emptyDir:
          medium: Memory
        name: istio-envoy
      - name: istio-certs
        secret:
          optional: true
          secretName: istio.default
status: {}
---
Currently this is a known bug and no real fix has been created to normalize node behavior.
See the following URLs:
https://github.com/kubernetes/kubernetes/issues/45419
https://github.com/kubernetes/kubernetes/issues/61117
https://github.com/Azure/AKS/issues/102
Hopefully we will have a solution soon.
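A stop-gap that is often used while waiting for a real fix (my own suggestion, not something from the linked issues) is to restart the container runtime and the kubelet on the affected node instead of recreating it:
# Run on the affected node (e.g. via SSH); a temporary workaround, not a fix
sudo systemctl restart docker
sudo systemctl restart kubelet
sudo journalctl -u kubelet --since "1 hour ago" | tail -n 50   # check whether PLEG recovers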

Phusion Passenger Rails not balancing request across workers

Below is the output when I run passenger-status on the server
Requests in queue: 0
* PID: 1821 Sessions: 0 Processed: 2971 Uptime: 15m 11s
CPU: 14% Memory : 416M Last used: 0s ago
* PID: 1847 Sessions: 0 Processed: 1066 Uptime: 15m 11s
CPU: 6% Memory : 256M Last used: 2s ago
* PID: 1861 Sessions: 0 Processed: 199 Uptime: 15m 11s
CPU: 1% Memory : 238M Last used: 3s ago
* PID: 1875 Sessions: 0 Processed: 37 Uptime: 15m 10s
CPU: 0% Memory : 196M Last used: 15s ago
* PID: 1900 Sessions: 0 Processed: 7 Uptime: 15m 10s
CPU: 0% Memory : 136M Last used: 33s ago
* PID: 1916 Sessions: 0 Processed: 4 Uptime: 15m 10s
CPU: 0% Memory : 126M Last used: 33s ago
* PID: 1932 Sessions: 0 Processed: 1 Uptime: 15m 10s
CPU: 0% Memory : 132M Last used: 14m 44s ago
* PID: 1946 Sessions: 0 Processed: 0 Uptime: 15m 10s
CPU: 0% Memory : 68M Last used: 15m 10s ago
* PID: 1962 Sessions: 0 Processed: 0 Uptime: 15m 9s
CPU: 0% Memory : 53M Last used: 15m 9s ago
* PID: 1980 Sessions: 0 Processed: 0 Uptime: 15m 9s
CPU: 0% Memory : 53M Last used: 15m 9s ago
The stack we are running is Nginx + Passenger + Rails.
My concern here is that, as the docs say, Passenger should be distributing load across the worker processes it has spawned, but as we can see from the output, only the top 2 workers get all the requests; the rest are just idle.
Also, over time the memory usage of the top workers increases.
Is this expected behaviour?
How can I rectify this, and can I improve performance in any way?
My Passenger conf is also below:
passenger_max_pool_size 20;
passenger_min_instances 10;
passenger_max_instances_per_app 0;
passenger_pre_start <api-endpoint>;
passenger_pool_idle_time 0;
passenger_max_request_queue_size 0;
Silly me, I made a comment a few minutes ago and now I found the answer.
Summary: Passenger routes each request to the first non-busy process in its list rather than round-robining across all of them, so traffic concentrates in the top processes whenever possible; nothing is wrong with your application.
The link explains most of it.
https://www.phusionpassenger.com/library/indepth/ruby/request_load_balancing.html#traffic-may-appear-unbalanced-between-processes

Passenger instance becomes idle, starts chewing up CPU

Here's the passenger-status output from one of my affected production instances:
Version : 4.0.53
Date : 2015-01-07 00:59:55 +0000
Instance: 6919
----------- General information -----------
Max pool size : 8
Processes : 8
Requests in top-level queue : 0
----------- Application groups -----------
/home/app/web#default:
App root: /home/app/web
Requests in queue: 0
* PID: 7009 Sessions: 1 Processed: 1607 Uptime: 53m 19s
CPU: 7% Memory : 217M Last used: 2m 51s ago
* PID: 7021 Sessions: 1 Processed: 1823 Uptime: 53m 19s
CPU: 6% Memory : 217M Last used: 2s ago
* PID: 7032 Sessions: 0 Processed: 2241 Uptime: 53m 19s
CPU: 7% Memory : 218M Last used: 2s ago
* PID: 7044 Sessions: 1 Processed: 1539 Uptime: 53m 19s
CPU: 15% Memory : 209M Last used: 14m 27s ago
* PID: 7057 Sessions: 0 Processed: 1549 Uptime: 53m 19s
CPU: 5% Memory : 217M Last used: 1s ago
* PID: 7074 Sessions: 1 Processed: 554 Uptime: 53m 18s
CPU: 41% Memory : 220M Last used: 41m 37s ago
* PID: 7085 Sessions: 1 Processed: 1564 Uptime: 53m 18s
CPU: 10% Memory : 219M Last used: 7m 5s ago
* PID: 7106 Sessions: 1 Processed: 14 Uptime: 53m 17s
CPU: 56% Memory : 174M Last used: 52m 30s ago
As you can see, two of the 8 instances have not been used in >40min, and yet they're chewing up most of my machine's CPU. Any tips on how to go about debugging this?
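For context, here is what I plan to try first (PID 7074 is one of the high-CPU processes above; I'm not sure all of these tools apply to every setup):
# Identify the busiest threads inside one of the suspect processes
top -H -p 7074
# See which system calls, if any, the process is stuck in
sudo strace -f -tt -T -p 7074
# If your Passenger version supports it, dump thread backtraces for all processes
sudo passenger-status --show=backtraces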
