Starting rails server as daemon doesn't trigger worker cluster - ruby-on-rails

I want to start the Rails server in production mode as a daemon running a worker cluster. When I start the server normally, everything works as expected:
rails s -e production -b 0.0.0.0
=> Booting Puma
=> Rails 5.0.0.1 application starting in production on http://0.0.0.0:3000
=> Run `rails server -h` for more startup options
[12340] Puma starting in cluster mode...
[12340] * Version 3.4.0 (ruby 2.3.0-p0), codename: Owl Bowl Brawl
[12340] * Min threads: 5, max threads: 5
[12340] * Environment: production
[12340] * Process workers: 3
[12340] * Preloading application
[12340] * Listening on tcp://0.0.0.0:3000
[12340] Use Ctrl-C to stop
[12340] - Worker 0 (pid: 12347) booted, phase: 0
[12340] - Worker 1 (pid: 12349) booted, phase: 0
[12340] - Worker 2 (pid: 12353) booted, phase: 0
However, when I add the -d flag, Puma starts in single mode:
rails s -e production -b 0.0.0.0 -d
=> Booting Puma
=> Rails 5.0.0.1 application starting in production on http://0.0.0.0:3000
=> Run `rails server -h` for more startup options
Checking the running processes confirms that only one instance is running, not the cluster mode I expected.
So how do I correctly launch with workers as a daemon process?
Any help is much appreciated.
NOTE: I am also running puma_worker_killer for rolling restarts, in case that is relevant.
rails (5.0.0.1)
puma (3.4.0)
puma_worker_killer (0.1.0)

According to the Puma docs, it's recommended that you start the server with bundle exec puma.
You can then start a cluster like this: bundle exec puma -t 8:32 -w 3, where -t sets the min:max number of threads and -w sets the number of workers.
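If you want those cluster settings to survive daemonizing, one option is to keep them in config/puma.rb and start Puma directly instead of going through rails s -d. A minimal sketch, assuming Puma 3.x (where the -d flag still exists) and the thread/worker counts from the question:

# config/puma.rb -- hypothetical sketch for Puma 3.x; tune the counts to your hardware
workers 3                                   # cluster mode: 3 forked worker processes
threads 5, 5                                # min, max threads per worker
environment ENV.fetch("RAILS_ENV", "production")
preload_app!                                # load the app once in the master, then fork workers
bind "tcp://0.0.0.0:3000"
pidfile "tmp/pids/puma.pid"

With that in place, bundle exec puma -C config/puma.rb -d should daemonize while keeping cluster mode (on Puma versions before 5.0, which removed -d).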

Related

Workers constantly rebooting when killed

I'm starting a server on port 3000 and it always boots two workers. I kill the pids, and then new pids pop up.
I've tried lsof, killing the pids, and even running a server parent killer program.
=> Booting Puma
=> Rails 5.1.6.1 application starting in development
=> Run `rails server -h` for more startup options
[23820] Puma starting in cluster mode...
[23820] * Version 3.9.1 (ruby 2.3.4-p301), codename: Private Caller
[23820] * Min threads: 5, max threads: 5
[23820] * Environment: development
[23820] * Process workers: 2
[23820] * Preloading application
[23820] * Listening on tcp://0.0.0.0:3000
[23820] Use Ctrl-C to stop
[23820] - Worker 0 (pid: 23982) booted, phase: 0
[23820] - Worker 1 (pid: 23983) booted, phase: 0
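Killing the worker pids shown above will not stop the cluster, because Puma's master process respawns any worker that dies; the master itself has to be signalled. A minimal Ruby sketch, assuming the default Rails pidfile location tmp/pids/server.pid:

# stop_puma.rb -- hypothetical helper; assumes the default Rails pidfile path
pidfile = "tmp/pids/server.pid"
if File.exist?(pidfile)
  master_pid = File.read(pidfile).to_i
  Process.kill("TERM", master_pid)  # graceful shutdown; workers exit with the master
  puts "Sent TERM to Puma master #{master_pid}"
else
  puts "No pidfile found at #{pidfile}"
end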

Puma starts but does not create the processes

I try to run my Puma application server with the following command:
RAILS_ENV=production puma -C config/puma.rb -e production -d
and then I see that everything appears to be going fine:
[3111] Puma starting in cluster mode...
[3111] * Version 3.12.0 (ruby 2.2.4-p230), codename: Llamas in Pajamas
[3111] * Min threads: 1, max threads: 6
[3111] * Environment: production
[3111] * Process workers: 2
[3111] * Phased restart available
[3111] * Listening on unix:///home/joaquin/Documents/ecommerce/vaypol-ecommerce/shared/sockets/puma.sock
[3111] * Daemonizing...
But in reality the processes never start up; checking with ps aux | grep puma shows nothing.
Here is my config/puma.rb:
# Change to match your CPU core count
workers 2
# Min and Max threads per worker
threads 1, 6
daemonize true
app_dir = File.expand_path("../..", __FILE__)
shared_dir = "#{app_dir}/shared"
# Default to production
rails_env = ENV['RAILS_ENV'] || "production"
environment rails_env
# Set up socket location
bind "unix://#{shared_dir}/sockets/puma.sock"
# Logging
stdout_redirect "#{shared_dir}/log/puma.stdout.log", "#{shared_dir}/log/puma.stderr.log", true
# Set master PID and state locations
pidfile "#{shared_dir}/pids/puma.pid"
state_path "#{shared_dir}/pids/puma.state"
activate_control_app
# Reconnect to the database in each worker after forking
on_worker_boot do
  require "active_record"
  ActiveRecord::Base.connection.disconnect! rescue ActiveRecord::ConnectionNotEstablished
  ActiveRecord::Base.establish_connection(YAML.load_file("#{app_dir}/config/database.yml")[rails_env])
end
What am I missing? Thanks in advance.
Puma worker processes are forked from the original parent process, which is a ruby process.
Consider testing for processes named ruby rather than puma, i.e. (using your approach):
ps aux | grep ruby
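For background, Puma's cluster mode is built on fork: the master loads the app and then forks each worker as another ruby process. A minimal Ruby illustration of that relationship (a sketch, not Puma's actual code):

# fork_demo.rb -- the child is just another ruby process, like a Puma worker
child = fork do
  puts "worker pid #{Process.pid}, forked from master #{Process.ppid}"
  sleep 1
end
puts "master pid #{Process.pid} forked worker #{child}"
Process.wait(child)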
The problem apparently was with the Puma socket; my nginx was not able to connect to it.
upstream myapp_puma {
server unix:///home/ubuntu/vaypol-ecommerce/shared/sockets/puma.sock fail_timeout=0;
}
Also make sure the directories under shared that Puma uses (log, pids, and sockets) exist; if they are missing, create them with mkdir.
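If you would rather not create them by hand, a small hedged addition to config/puma.rb (placed after shared_dir is defined) can create them at boot:

require "fileutils"
# ensure the directories Puma writes to exist before binding, logging, or daemonizing
%w[log pids sockets].each do |dir|
  FileUtils.mkdir_p("#{shared_dir}/#{dir}")
end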

How come running `rails s` as daemon doesn't start Puma?

When I run rails s -e production -p 9292 (normal case), I get:
=> Booting Puma
=> Rails 5.1.1 application starting in production on http://0.0.0.0:9292
=> Run `rails server -h` for more startup options
Puma starting in single mode...
* Version 3.8.2 (ruby 2.3.0-p0), codename: Sassy Salamander
* Min threads: 5, max threads: 5
* Environment: production
* Listening on tcp://0.0.0.0:9292
Use Ctrl-C to stop
When I run rails s -d -e production -p 9292 (as daemon), I get:
=> Booting Puma
=> Rails 5.1.1 application starting in production on http://0.0.0.0:9292
=> Run `rails server -h` for more startup options
That's it. I would need to run bundle exec puma -e production -p 9292 --pidfile tmp/pids/puma.pid -d to get the 2nd part:
Puma starting in single mode...
...
Also where are my Puma logs? I see a blank production.log in my log folder and no other log files.
Background context: When I run curl 0.0.0.0:9292 after running both rails and puma as daemons, I get the error An unhandled lowlevel error occurred. The application logs may have details.
rails s -e production -p 9292 -d
Ah, it seems Puma only cares about RAILS_ENV when used with Capistrano. Can you use RACK_ENV or -e instead? That should work:
RACK_ENV=production bundle exec puma -p 3000
or
bundle exec puma -p 3000 -e production
See here
Hope this helps.
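As for the missing logs: when Puma daemonizes, its stdout and stderr are detached from the terminal, so nothing appears unless you redirect them. A minimal config/puma.rb sketch (the paths are only examples):

# redirect daemonized Puma output; the final true means append instead of truncate
stdout_redirect "log/puma.stdout.log", "log/puma.stderr.log", true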
Run the server as a daemon with:
rails s -e production -p 9292 -d
and to kill the server:
kill `cat tmp/pids/server.pid`
For Puma versions below 5, we can use the -d option to start in the background:
puma -e production -p 4132 -C config/puma.rb -d
But the Puma gem doesn't support the -d option in version 5 and above.
In version 5.0 the authors of the popular Ruby web server Puma chose to remove the daemonization support from Puma, because the code wasn't well maintained, and because other and perhaps better options exist (such as systemd, etc.), not to mention many people have switched to Kubernetes and Docker, where you want to start all servers in the foreground.
You can use this gem to start Puma in the background:
https://github.com/kigster/puma-daemon
https://rubygems.org/gems/puma-daemon/versions/0.1.2
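Based on my reading of the puma-daemon README (please confirm against the links above), the gem restores the daemonize directive on Puma 5+ once it is required from the Puma config; roughly:

# Gemfile (per the puma-daemon README; verify against the project page)
#   gem "puma", ">= 5"
#   gem "puma-daemon", require: false

# config/puma.rb
require "puma/daemon"   # restores daemonization on Puma 5+
workers 2
threads 1, 6
environment ENV.fetch("RACK_ENV", "production")
daemonize               # provided by puma-daemon; removed from core Puma in 5.0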

Trouble deploying rails into amazon ec2 - URI::InvalidURIError

On my Amazon EC2 server, after I install ruby/rails/rbenv I run into a URI::InvalidURIError. I'm not sure if I might have an issue with the way I installed rbenv.
rails s -p 3000 -b 0.0.0.0
=> Booting Puma
=> Rails 5.0.1 application starting in development on http://0.0.0.0:3000
=> Run `rails server -h` for more startup options
Puma starting in single mode...
* Version 3.6.2 (ruby 2.3.1-p112), codename: Sleepy Sunday Serenity
* Min threads: 5, max threads: 5
* Environment: development
Exiting
/home/ec2-user/.rbenv/versions/2.3.1/lib/ruby/2.3.0/uri/rfc3986_parser.rb:21:in `split': URI must be ascii only "tcp://0.0.0.0\u{feff}:3000" (URI::InvalidURIError)
from /home/ec2-user/.rbenv/versions/2.3.1/lib/ruby/2.3.0/uri/rfc3986_parser.rb:73:in `parse'
from /home/ec2-user/.rbenv/versions/2.3.1/lib/ruby/2.3.0/uri/common.rb:227:in `parse'
Somehow, you managed to add an invisible <U+FEFF> character at the end of your command-line:
rails s -p 3000 -b 0.0.0.0[<U+FEFF> is here]
Remove this character from your command-line, and your server should boot fine:
rails s -p 3000 -b 0.0.0.0
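If you suspect an invisible character but cannot see it, one quick check is to print the code point of every non-ASCII character in the pasted text; an illustrative Ruby one-off (the string below reproduces the command from the question):

# prints the code point of each non-ASCII character hiding in a string
cmd = "rails s -p 3000 -b 0.0.0.0\u{feff}"
cmd.chars.reject(&:ascii_only?).each { |c| puts format("U+%04X", c.ord) }
# => U+FEFF  (the zero-width BOM the URI parser complained about)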

How to run multiple rails environments in parallel?

I would like to run the same project twice on the same server, so I defined two environments, alpha and beta, for this purpose.
alpha should run on port 3000
beta should run on port 4000
Then I try to start the server twice:
$ ruby bin/rails server -b 0.0.0.0 -p 3000 -e alpha --pid tmp/pids/server-alpha.pid
$ ruby bin/rails server -b 0.0.0.0 -p 4000 -e beta --pid tmp/pids/server-beta.pid
Unfortunately one of those servers (the second to start) stops when it recognizes that there is another instance.
Environment alpha starts:
=> Booting Puma
=> Rails 5.0.0.1 application starting in alpha on http://0.0.0.0:3000
=> Run `rails server -h` for more startup options
Puma starting in single mode...
* Version 3.6.0 (ruby 2.3.1-p112), codename: Sleepy Sunday Serenity
* Min threads: 5, max threads: 5
* Environment: alpha
* Listening on tcp://0.0.0.0:3000
Use Ctrl-C to stop
Environment beta starts:
=> Booting Puma
=> Rails 5.0.0.1 application starting in beta on http://0.0.0.0:4000
=> Run `rails server -h` for more startup options
Puma starting in single mode...
* Version 3.6.0 (ruby 2.3.1-p112), codename: Sleepy Sunday Serenity
* Min threads: 5, max threads: 5
* Environment: beta
* Listening on tcp://0.0.0.0:4000
Use Ctrl-C to stop
Environment alpha restarts (don't know why!):
* Restarting...
=> Booting Puma
=> Rails 5.0.0.1 application starting in alpha on http://0.0.0.0:3000
=> Run `rails server -h` for more startup options
A server is already running. Check tmp/pids/server-alpha.pid.
Exiting
Obviously the pid file still exists. But how can I avoid a restart of the server when I start another one? How can I tell rails to delete the pidfile on restart? Or how else could I handle this problem?
You probably have plugin :tmp_restart in your config/puma.rb. Every time tmp/restart.txt is touched (which is every time a server starts), the other server restarts.
Just comment out that line and it works (you won't be able to restart your rails server by touching tmp/restart.txt anymore).
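For reference, here is roughly what the relevant part of a default Rails 5 config/puma.rb looks like with the plugin commented out (the values are the generator defaults; adjust as needed):

threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads threads_count, threads_count

port        ENV.fetch("PORT") { 3000 }
environment ENV.fetch("RAILS_ENV") { "development" }

# plugin :tmp_restart   # disabled: touching tmp/restart.txt no longer restarts this server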
I'm not sure this will work, but try using = after --pid, like this:
$ ruby bin/rails server -b 0.0.0.0 -p 3000 -e alpha --pid=tmp/pids/server-alpha.pid
$ ruby bin/rails server -b 0.0.0.0 -p 4000 -e beta --pid=tmp/pids/server-beta.pid
