RubyMine stop button creates zombie Puma servers in Rails app

Related questions/issues that do not directly address this problem
Puma won't die when killing rails server with ctrl-c
Can't kill a process - stopping rails server
Unstoppable server - rails
How to stop/kill server (development) in rubymine
Can't stop rails server
Signal sent by the stopping button
Send SIGINT signal to running program
Sending SIGTERM to a server process when running via rails s (in clustered mode) fails to stop the server successfully
Appropriately wait for worker child process when shutting down via SIGTERM
Signal traps exit with status 0 instead of killing own process
Rails server doesn't stop on windows bash killing
Rubymine on Windows with WSL based interpreter: Puma fails to kill the server process when it terminates
WSL process tree survives a taskkill
[WSL] Rubymine does not kill rails process, need to manually do it.
Specifications
Windows 10 (10.0.17134 Build 17134 with Feature Update 1803) running Ubuntu 18.04.1 LTS via Windows Subsystem for Linux
RubyMine 2018.2.3
Puma 3.12.0
Rails 5.1.4
Ruby 2.5.1p57
Issue
Starting a development environment with the green start arrow in RubyMine starts a Puma server as expected. The server console log shows:
]0;Ubuntu^Z
=> Booting Puma
=> Rails 5.1.4 application starting in development
=> Run `rails server -h` for more startup options
[2331] Puma starting in cluster mode...
[2331] * Version 3.9.1 (ruby 2.5.1-p57), codename: Private Caller
[2331] * Min threads: 5, max threads: 5
[2331] * Environment: development
[2331] * Process workers: 2
[2331] * Preloading application
[2331] * Listening on tcp://127.0.0.1:3000
[2331] Use Ctrl-C to stop
[2331] - Worker 0 (pid: 2339) booted, phase: 0
[2331] - Worker 1 (pid: 2343) booted, phase: 0
(Not sure what's with the broken characters around Ubuntu at the top, but that's another issue...)
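For reference, the boot messages above would be produced by a cluster-mode Puma config roughly like the following sketch; the actual config/puma.rb is not shown in the question, so the exact contents are an assumption:
# config/puma.rb -- sketch only; the real file is not part of the question
workers 2                    # matches "Process workers: 2"
threads 5, 5                 # matches "Min threads: 5, max threads: 5"
preload_app!                 # matches "Preloading application"
bind 'tcp://127.0.0.1:3000'  # matches "Listening on tcp://127.0.0.1:3000"
environment 'development'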
I can monitor the server by running ps aux | grep puma in the console. The output is as follows:
samort7 2456 16.9 0.3 430504 66420 tty5 Sl 23:58 0:05 puma 3.9.1 (tcp://127.0.0.1:3000) [rails-sample-app]
samort7 2464 1.6 0.3 849172 54052 tty5 Sl 23:58 0:00 puma: cluster worker 0: 2456 [rails-sample-app]
samort7 2468 1.5 0.3 849176 54052 tty5 Sl 23:58 0:00 puma: cluster worker 1: 2456 [rails-sample-app]
samort7 2493 0.0 0.0 14804 1200 tty4 S 23:59 0:00 grep --color=auto puma
Clicking the red "Stop" button in IntelliJ causes this line to show up in the server console log:
Process finished with exit code 1
However, running ps aux | grep puma again reveals that the puma server is still running:
samort7 2464 0.2 0.3 849172 54340 ? Sl Oct12 0:00 puma: cluster worker 0: 2456 [rails-sample-app]
samort7 2468 0.2 0.3 849176 54332 ? Sl Oct12 0:00 puma: cluster worker 1: 2456 [rails-sample-app]
samort7 2505 0.0 0.0 14804 1200 tty4 S 00:01 0:00 grep --color=auto puma
Repeatedly clicking start and stop will cause more and more zombie processes to be created. These processes can be killed by issuing a pkill -9 -f puma command, but that is less than ideal and defeats the whole purpose of the stop button.
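A slightly gentler cleanup than pkill -9 is to send the survivors SIGTERM, which Puma treats as a graceful shutdown. A small Ruby sketch of that idea (the script name and the pgrep pattern are my own, not from the question):
# stop_orphaned_servers.rb -- hypothetical cleanup helper
pids = `pgrep -f puma`.split.map(&:to_i) - [Process.pid]  # exclude this script itself
pids.each do |pid|
  begin
    Process.kill('TERM', pid)  # TERM triggers Puma's graceful shutdown, unlike KILL
  rescue Errno::ESRCH
    # the process exited between pgrep and kill; nothing to do
  end
end
puts "signalled #{pids.size} puma process(es)"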
Expected Result
Running the server directly from the terminal with rails s -b 127.0.0.1 -p 3000 starts the server as before and then pressing Ctrl+C gives the following output and doesn't create zombie processes:
[2688] - Gracefully shutting down workers...
[2688] === puma shutdown: 2018-10-13 00:12:00 -0400 ===
[2688] - Goodbye!
Exiting
Analysis
According to the RubyMine documentation, clicking the stop button:
invokes soft kill allowing the application to catch the SIGINT event and perform graceful termination (on Windows, the Ctrl+C event is emulated).
Despite what the documentation claims, this does not appear to be happening. It seems that the stop button is actually issuing a SIGKILL signal, stopping the process without allowing it to gracefully clean up.
I have also noticed that if I close all terminal windows both inside and outside of RubyMine, then open up a new terminal window, the zombie processes are gone.
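One way to check what the Stop button actually sends is to run a tiny trap script under the same RubyMine run configuration (a sketch; the file name is mine). SIGINT and SIGTERM can be trapped and reported, while SIGKILL cannot, so a silent disappearance points at a hard kill:
# signal_probe.rb -- hypothetical probe script
%w[INT TERM].each do |sig|
  Signal.trap(sig) do
    puts "received SIG#{sig}"
    exit
  end
end
puts "PID #{Process.pid} waiting for a signal (press Stop in RubyMine)..."
sleep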
Question
How can I get RubyMine to issue the correct SIGINT signal upon pressing the red stop button so that the Puma server can be shut down gracefully without creating zombie processes?
For reference, I am experiencing this issue in this commit of my codebase (a chapter from Michael Hartl's Rails Tutorial).

Related

What are benefits of using puma / systemd with socket activation?

I'm using Ruby on Rails 6 with Puma server managed by systemd on Ubuntu 20.04.
On the official Puma website, two setup types are given:
simple
with socket activation
There it says:
systemd and puma also support socket activation, where systemd opens the listening socket(s) in advance and provides them to the puma master process on startup. Among other advantages, this keeps listening sockets open across puma restarts and achieves graceful restarts, including when upgraded puma, and is compatible with both clustered mode and application preload.
[emphasis mine]
I have two questions:
What's a "graceful" restart?
What are the "other advantages"?
One other advantage is the ability to use 'system ports' (e.g. port 80) while running puma as a non-root user.
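To illustrate that last point, a minimal socket unit sketch in the spirit of Puma's systemd instructions (the unit name, path, and port are assumptions, not a drop-in config):
# /etc/systemd/system/puma.socket -- sketch only
[Socket]
# systemd, running as root, opens the privileged port and hands the
# listening socket to the puma master process on startup
ListenStream=0.0.0.0:80

[Install]
WantedBy=sockets.target
The matching puma.service would then declare Requires=puma.socket; the exact wiring depends on the Puma version, so treat this as a starting point and check the systemd docs shipped with Puma.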

Puma does not seem to be opening the port it says it is

I'm developing a Rails app using Puma as the server on my local machine.
When I start the local server, the logs clearly indicate that Puma is opening a connection on localhost:3011:
=> Booting Puma
=> Rails 5.0.4 application starting in development on http://localhost:3011
=> Run `rails server -h` for more startup options
Puma starting in single mode...
* Version 3.9.1 (ruby 2.3.4-p301), codename: Private Caller
* Min threads: 5, max threads: 5
* Environment: development
* Listening on tcp://0.0.0.0:3000
Use Ctrl-C to stop
But when I run netstat to see the open port, port 3011 does not seem to be active:
kevin@kevin-devbox:~/Programming$ netstat -an | grep "3011"
(there is no output)
kevin@kevin-devbox:~/Programming$
How do I go about figuring out why my local server isn't opening the port it says it is?
The statements prefixed with => are coming from Rails, which sometimes doesn't get the right information. You can see the source for that log statement here.
What really matters for binding ports is puma's bindings, which are shown lower in the logs. You can see the source for that log here. You can configure the bindings in a couple of different ways:
Add the -p flag to the server command: rails s -p 3000
Add bind 'tcp://0.0.0.0:3000' to config/puma.rb.
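For reference, a minimal config/puma.rb sketch showing both options from the list above (the values mirror the answer; adjust the port to whichever one you actually want):
# config/puma.rb -- sketch; pick one way to set the port, not both
port ENV.fetch('PORT', 3000)   # same effect as `rails s -p 3000`
# bind 'tcp://0.0.0.0:3000'    # explicit bind string, as suggested above
threads 5, 5
environment ENV.fetch('RAILS_ENV', 'development')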

Heroku Rails 4 Puma app spawning extra instance

I'm running a basic Rails 4 (ruby 2.1.4) app on Heroku with a Puma config as follows:
workers Integer(ENV['PUMA_WORKERS'] || 1)
threads Integer(ENV['MIN_THREADS'] || 6), Integer(ENV['MAX_THREADS'] || 6)
I currently do not have any ENV vars set so I should be defaulting to 1 worker.
The problem is, that while investigating a potential memory leak, it appears that 2 'instances' of my web.1 dyno are running, at least according to NewRelic.
I have heroku labs:enable log-runtime-metrics enabled and it shows my memory footprint at ~400MB. On NewRelic it shows my footprint at ~200MB AVG across 2 'instances'.
heroku ps shows:
=== web (1X): `bundle exec puma -C config/puma.rb`
web.1: up 2014/10/30 13:49:29 (~ 4h ago)
So why would NewRelic think I have 2 instances running? If I do a heroku restart, NewRelic will see only 1 instance for a while and then bump up to 2. Is this something Heroku is doing but not reporting to me, or is it a Puma thing, even though workers should be set to 1?
See the Feb 17, 2015 release of New Relic 3.10.0.279, which addresses this specific issue when tracking Puma instances. I'm guessing that since your app is running on Heroku, you have preload_app! set in your Puma config, so this should apply.
From the release notes:
Metrics no longer reported from Puma master processes.
When using Puma's cluster mode with the preload_app! configuration directive, the agent will no longer start its reporting thread in the Puma master process. This should result in more accurate instance counts, and more accurate stats on the Ruby VMs page (since the master process will be excluded).
I'm testing the update on a project with a similar issue and it seems to be reporting more accurately.
It's because Puma always has 1 master process, from which all the workers are spawned.
So, the instance count will come from the following:
1 (master process) + <N_WORKER>
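Putting both answers together, a Heroku-style config with preload_app! (assumed here, since the full config/puma.rb isn't shown) makes the master-plus-workers layout explicit:
# config/puma.rb -- sketch of the assumed setup
workers Integer(ENV['PUMA_WORKERS'] || 1)   # 1 worker still means 2 OS processes: master + worker
threads Integer(ENV['MIN_THREADS'] || 6), Integer(ENV['MAX_THREADS'] || 6)
preload_app!                                # master loads the app; workers are forked from it

on_worker_boot do
  # re-establish per-worker connections after the fork (standard with preload_app!)
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord)
end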

When using DelayedJob, info about the worker thread doesn't print in the console until foreman is killed

I've noticed this problem for the past couple of weeks when developing with the DelayedJob gem.
I start a Rails server in a terminal and I get info back from both the web and worker threads:
foreman start
22:10:31 web.1 | started with pid 10128
22:10:31 worker.1 | started with pid 10129
However, after this point only the web thread prints information to the console. It's not until I kill foreman that I get a complete dump, all at once, of everything the worker thread logged during the entire server run.
Any way to get the worker thread information printed out to the console in real time during development?
Thanks!
According to the status on this issue, the newest version of foreman should flush logs on its own.
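Separately from the foreman fix, buffered STDOUT is a common reason worker logs only show up at shutdown; a one-line sketch (the initializer file name is hypothetical) that disables the buffering:
# config/initializers/stdout_sync.rb -- hypothetical file name
# flush STDOUT after every write so foreman can interleave worker output in real time
$stdout.sync = true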

Reducing Memory Usage in Spree

I checked my applications, and they're using a huge amount of memory, which is crashing my server.
Here's my ps output:
RSS COMMAND
1560 sshd: shadyfront#pts/0
1904 -bash
1712 PassengerNginxHelperServer /home/shadyfront/webapps/truejersey/gems/gems/p
8540 Passenger spawn server
612 nginx: master process /home/shadyfront/webapps/truejersey/nginx/sbin/nginx
1368 nginx: worker process
94796 Rails: /home/shadyfront/webapps/truejersey/True-Jersey
1580 PassengerNginxHelperServer /home/shadyfront/webapps/age_of_revolt/gems/gem
8152 Passenger spawn server
548 nginx: master process /home/shadyfront/webapps/age_of_revolt/nginx/sbin/ng
1240 nginx: worker process
92196 Rack: /home/shadyfront/webapps/age_of_revolt/Age-of-Revolt
904 ps -u shadyfront -o rss,command
Is this abnormally large for an e-commerce application?
If you are on Linux, you can use
ulimit
http://ss64.com/bash/ulimit.html
Not sure why it is eating your memory, though.
If you're using a 64-bit OS, then it's fairly normal.
RSS COMMAND
89824 Rack: /var/www/vhosts/zmdev.net/zmdev # RefineryCMS on Passenger
148216 thin server (0.0.0.0:5000) # Redmine
238856 thin server (0.0.0.0:3000) # Spree after a couple of weeks
140260 thin server (0.0.0.0:3000) # Spree after a fresh reboot
All of these are on 64-bit OSes; there are significant memory reductions when using a 32-bit OS.
Here's the exact same Spree application running WEBrick in my dev environment on 32-bit Ubuntu:
RSS COMMAND
58904 /home/chris/.rvm/rubies/ruby-1.9.2-p180/bin/ruby script/rails s
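To compare numbers like these from inside a running app, the current process's RSS can be read from a Rails console; a small sketch assuming a Linux host where ps is available:
# prints the current process's resident set size in megabytes (Linux `ps` assumed)
rss_kb = `ps -o rss= -p #{Process.pid}`.to_i
puts format('RSS: %.1f MB', rss_kb / 1024.0)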
