rails puma - command without config file - path to error log? - ruby-on-rails

I am running puma like this:
puma -e production -b unix://blahblah.sock
Setting up configs etc. is going to take a little more time, so I want to put it into production using this command for now. Of course, I'm using nginx in front.
Where can I find the error logs?
Or is there any other option I should include on the command line to make error logging work?

It's good practice to create a config/puma.rb file, where you can manage the related configuration settings.
Set up puma
Puma can be configured just by providing command-line arguments, but it's more convenient to bake the configuration settings into a configuration file and then provide that file with a single -C command line directive. A simple puma.rb that works with MRI Ruby is:
#!/usr/bin/env puma
# start puma with:
# RAILS_ENV=production bundle exec puma -C ./config/puma.rb
application_path = File.expand_path('../..', __FILE__) # app root, computed relative to config/puma.rb
railsenv = 'production'
directory application_path
environment railsenv
daemonize true
pidfile "#{application_path}/tmp/pids/puma-#{railsenv}.pid"
state_path "#{application_path}/tmp/pids/puma-#{railsenv}.state"
stdout_redirect "#{application_path}/log/puma-#{railsenv}.stdout.log",
                "#{application_path}/log/puma-#{railsenv}.stderr.log"
threads 0, 16
bind "unix://#{application_path}/tmp/sockets/#{railsenv}.socket"
For more info, please refer to Puma and Nginx production stack.
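If you want the error output before you get around to a config file: without stdout_redirect, Puma simply writes its output (including errors) to the terminal's stdout/stderr, so there is no log file unless you redirect it yourself. Recent Puma versions also accept redirect flags on the command line; a hedged example, assuming your Puma version supports these flags (check puma --help):
puma -e production -b unix://blahblah.sock --redirect-stdout log/puma.stdout.log --redirect-stderr log/puma.stderr.log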

Related

RoR in Windows not working for pre-existing app built with RoR in linux

I've installed Ruby on Rails using RailsInstaller, and also PostgreSQL, on Windows 8. I'm trying to run rails server using the files for a pre-existing app, but I'm getting the error 'Worker mode not supported on JRuby or Windows'.
In my config/puma.rb file I've set workers to 0, then I get an error about daemon mode not being supported on Windows. Basically, each time I change something I get more errors.
I've fixed up environment variables, gems, etc. (like in other posts such as Cannot install Puma gem on Ruby on Rails). Is there any hope of running a pre-existing RoR app built on Linux on a Windows machine?
When I run rails server for the RoR 'blog' example it works fine, so I know that RoR is definitely working on Windows!
This is my 'de-identified' config/puma.rb file. Is it because on Windows I have no /var/app folder? I've played around with directories etc. to no avail.
#!/usr/bin/env puma
# start puma with:
# RAILS_ENV=production bundle exec puma -C ./config/puma.rb
workers 0
theident = 'nameofthing'
application_path = '/var/app/'+ theident + '.address.com.au/current'
railsenv = 'production'
directory application_path
environment railsenv
daemonize false
pidfile "#{application_path}/tmp/pids/puma-#{railsenv}.pid"
state_path "#{application_path}/tmp/pids/puma-#{railsenv}.state"
stdout_redirect"#{application_path}/log/puma-#{theident}.log"
threads 0, 16
bind "unix:///var/run/puma/" + theident + "_app.sock" `
I have changed those directories to the current path and now running 'rails server' starts up, but localhost:3000 gives a 'page not working' error. I am getting errors about SIGUSR1 not working, SIGUSR2 not working, etc.
The "workers" method isn't supported for JRuby nor Windows, so the best solution would be to remove the line from puma.rb causing the error. In my case I removed;
workers Integer(ENV['WEB_CONCURRENCY'] || 2)
So what remains is:
threads_count = Integer(ENV['RAILS_MAX_THREADS'] || 5)
threads threads_count, threads_count
preload_app!
rackup DefaultRackup
port ENV['PORT'] || 3000
environment ENV['RACK_ENV'] || 'development'
on_worker_boot do
  # Worker specific setup for Rails 4.1+
  # See: https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server#on-worker-boot
  ActiveRecord::Base.establish_connection
end
It might be different for you, but the offending line will be the one that starts with "workers".
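If you'd rather keep a single puma.rb that works on both your Linux servers and Windows, a hedged alternative (not part of the original answer) is to guard the cluster-mode settings behind a platform check, roughly like this:
# config/puma.rb -- sketch only; enables worker mode on MRI/Linux, skips it on Windows and JRuby
unless Gem.win_platform? || RUBY_ENGINE == 'jruby'
  workers Integer(ENV['WEB_CONCURRENCY'] || 2)
  preload_app!
end
threads_count = Integer(ENV['RAILS_MAX_THREADS'] || 5)
threads threads_count, threads_count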

Puma unable to find production environment configuration

I have a rails (4.2.4) app running with puma. The documentation says:
If an environment is specified, either via the -e and --environment flags, or through the RACK_ENV environment variable, the default file location will be config/puma/environment_name.rb.
And so that's where my configuration files are.
In my development environment, puma starts up just fine. Here is the configuration:
workers 1
threads 1, 6
environment 'development'
activate_control_app
My production environment has problems however. Here's the configuration:
workers 1
threads 1, 6
environment 'production'
app_dir = File.expand_path('../../..', __FILE__)
bind "unix://#{app_dir}/tmp/sockets/puma.sock"
stdout_redirect "#{app_dir}/log/puma.stdout.log",
"#{app_dir}/log/puma.stderr.log",
true
pidfile "#{app_dir}/tmp/pids/puma.pid"
state_path "#{app_dir}/tmp/pids/puma.state"
activate_control_app
daemonize true
This is obviously more complicated, mostly to make use of sockets, logging, and daemonization. I know it works however, because it starts just fine with the following command:
$ bundle exec pumactl start
So far so good. But if I try to stop or restart the server in the same way, I get the following message:
$ bundle exec pumactl stop
Neither pid nor control url available
If I specify the location of the configuration file it works:
$ bundle exec pumactl -F config/puma/production.rb stop
Why do I need to specify the configuration file for stop and restart but not start?
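(Side note, hedged and not verified against this exact setup: pumactl can also locate a running server through its state file, so pointing -S at the state_path declared in the production config above may work without repeating -F:)
$ bundle exec pumactl -S tmp/pids/puma.state stop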

`rails server puma` vs. `puma`

Some guides (example) recommend this to start one's webserver
bundle exec rails server puma
But I've always just started the server with puma directly
bundle exec puma
Does something special happen when firing up Puma (or any other server) via rails server?
When you use rails s <server>, the server is launched from the Rails command and is aware of the Rails environment.
This makes it possible, for example, to use any of the features and flags offered by the rails server command.
rails s --help
Usage: rails server [mongrel, thin, etc] [options]
-p, --port=port Runs Rails on the specified port.
Default: 3000
-b, --binding=ip Binds Rails to the specified ip.
Default: 0.0.0.0
-c, --config=file Use custom rackup configuration file
-d, --daemon Make server run as a Daemon.
-u, --debugger Enable ruby-debugging for the server.
-e, --environment=name Specifies the environment to run this server under (test/development/production).
Default: development
-P, --pid=pid Specifies the PID file.
Default: tmp/pids/server.pid
-h, --help Show this help message.
For instance, you can attach a debugger to the session by passing --debugger, or daemonize the server.
The second advantage is that the Puma version is kept under control, since you have to list the gem in the Gemfile. This is already true if you start it with bundle exec like you are doing.
Conversely, when you simply run $ puma (or $ bundle exec puma) you're not passing through the Rails system. Puma will try to find a Rack bootstrap file and will use it (this works because Rails provides a config.ru script in the application root).
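For reference, the config.ru that Rails generates is roughly the following (Rails 4.x shown; newer versions differ slightly):
# config.ru -- as generated by a Rails 4.x app
require ::File.expand_path('../config/environment', __FILE__)
run Rails.application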
Generally speaking, there is no real difference if you don't need to pass specific options to the server. I like Puma and I tend to use it in some projects even when we use Unicorn in production; running $ puma as a standalone command is handy because I don't need to add it to the Gemfile.
However, I would probably go with $ rails s puma if my entire stack uses Puma. This is also the command suggested in the documentation.

Capistrano-unicorn gem getting wrong environment set

I've been using this gem for a while and just took the plunge to try deploying an actual staging environment to my staging server, and I ran into issues. Unicorn starts via the unicorn_rails command with -E production despite all the settings being correct, AFAIK.
I noticed in deploy.rb that my unicorn_bin variable was set as unicorn_rails. I took out this setting in my deploy.rb. However unicorn:duplicate still executes the unicorn_rails command, when the default should be unicorn.
My vars are all set to staging in the deploy/staging.rb, as outlined in the multistage setup wiki document, but I noticed -E is still getting set to production.
Relevant info:
Here's my output from my unicorn.log file after a deploy:
executing ["/var/www/apps/myapp/shared/bundle/ruby/2.0.0/bin/unicorn_rails", "-c", "/var/www/apps/bundio/current/config/unicorn.rb", "-E", "production", "-D", {12=>#<Kgio::UNIXServer:/tmp/bundio.socket>, 13=>#<Kgio::TCPServer:fd 13>}] (in /var/www/apps/bundio/current)
Here's the output from cap -T (defaults to staging):
# Environments
rails_env "staging"
unicorn_env "staging"
unicorn_rack_env "staging"
# Execution
unicorn_user nil
unicorn_bundle "/usr/local/rvm/gems/ruby-2.0.0-p247#global/bin/bundle"
unicorn_bin "unicorn"
unicorn_options ""
unicorn_restart_sleep_time 2
# Relative paths
app_subdir ""
unicorn_config_rel_path "config"
unicorn_config_filename "unicorn.rb"
unicorn_config_rel_file_path "config/unicorn.rb"
unicorn_config_stage_rel_file_path "config/unicorn/staging.rb"
# Absolute paths
app_path "/var/www/apps/myapp/current"
unicorn_pid "/var/www/apps/myapp/shared/pids/unicorn.myapp.pid"
bundle_gemfile "/var/www/apps/myapp/current/Gemfile"
unicorn_config_path "/var/www/apps/myapp/current/config"
unicorn_config_file_path "/var/www/apps/myapp/current/config/unicorn.rb"
unicorn_config_stage_file_path
-> "/var/www/apps/myapp/current/config/unicorn/staging.rb"
And another curiosity: the unicorn_rails -E flag should reference the Rails environment, whereas the unicorn -E flag should reference the Rack env. The Rack env should only ever get the values development and deployment, but here it gets set to production, which is a bit strange (see the Unicorn docs for the settings of the RACK_ENV variable).
Any insight into this would be much appreciated. On my staging server, I've also set the RAILS_ENV to staging. I've set up the things for rails for another environment, like adding staging.rb in my environments folder, adding a staging section to database.yml, etc.
Important lines in lib/capistrano-unicorn/config.rb talking about unicorn_rack_env:
_cset(:unicorn_env) { fetch(:rails_env, 'production') }
_cset(:unicorn_rack_env) do
  # Following recommendations from http://unicorn.bogomips.org/unicorn_1.html
  fetch(:rails_env) == 'development' ? 'development' : 'deployment'
end
Thanks in advance.
Ok, after a long time not having the correct environment, I have discovered the issue!
Basically, my init scripts were running BEFORE my capistrano-unicorn bin was doing its thing.
So, make sure that any init.d or upstart scripts that manage Unicorn and its workers are taken into account when capistrano-unicorn runs its restart / reload / duplicate tasks.
I did not think to look at these scripts when I had to debug the stale pid file / already running / unable to listen on socket errors. But it makes sense, as upstart starts Unicorn when it is not running, and then capistrano-unicorn is also attempting to start Unicorn.
I have now combined these capistrano tasks and hooks with Monit and a Unicorn init script.
Capistrano tasks:
namespace :monit do
  desc 'wait 20 seconds'
  task :wait_20_seconds do
    sleep 20
  end

  task :monitor_all, :roles => :app do
    sudo "monit monitor all"
  end

  task :unmonitor_all, :roles => :app do
    sudo "monit unmonitor all"
  end

  desc 'monitor unicorn in the monit rc file'
  task :monitor_unicorn, :roles => :app do
    sudo "monit monitor unicorn"
  end

  desc 'unmonitor unicorn in the monit rc file'
  task :unmonitor_unicorn, :roles => :app do
    sudo "monit unmonitor unicorn"
  end
end
Capistrano hooks:
after 'deploy:restart', 'unicorn:duplicate' # app preloaded. check https://github.com/sosedoff/capistrano-unicorn section for zero downtime
before 'deploy', "monit:unmonitor_unicorn"
before 'deploy:migrations', "monit:unmonitor_unicorn"
after 'deploy', 'monit:wait_20_seconds'
after "deploy:migrations", "monit:wait_20_seconds"
after 'monit:wait_20_seconds', 'monit:monitor_unicorn'
I use Monit to monitor my unicorn process:
Within /etc/monit/monitrc:
check process unicorn
  with pidfile /var/www/apps/my_app/shared/pids/mypid.pid
  start program = "/usr/bin/sudo service unicorn start"
  stop program = "/usr/bin/sudo service unicorn stop"
Within your init script, you will start the unicorn process with something like:
unicorn_rails -c /var/www/apps/my_app/current/config/unicorn.rb -E staging -D
Make sure the -E flag is set to the correct environment. The capistrano-unicorn gem provides variables you can override with :set in deploy.rb (or the per-stage files) to specify the environment for that Unicorn process.
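For example, a sketch of the relevant lines in config/deploy/staging.rb (the variable names are taken from the cap -T output above; adjust them to your setup):
# config/deploy/staging.rb -- sketch; variable names match the cap -T listing above
set :rails_env, 'staging'
set :unicorn_env, 'staging'
set :unicorn_bin, 'unicorn'   # avoid falling back to unicorn_rails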

foreman development vs production (rails)

what is the "foreman way" for behaving differently in production vs
development? That is we want foreman start to start up a bunch of
stuff in dev, however in heroku production we don't need it to start
(for example) solr.
I follow this convention:
Procfile defines all processes
.foreman sets specific foreman variables
Development:
.env sets environment variables for each developer
.env.example sets defaults for development
foreman start starts all processes
Production:
heroku config sets environment variables
heroku ps:scale turns on or off whichever processes are needed for production
Here's an example from a project.
Procfile:
web: bundle exec unicorn_rails -p $PORT -c ./config/unicorn.rb
worker: bundle exec rake jobs:work
search: bundle exec rake sunspot:solr:run
.env.example:
# default S3 bucket
S3_KEY=keykeykeykeykeykey
S3_SECRET=secretsecretsecret
S3_BUCKET=myapp-development
.env:
# developer's private S3 bucket
S3_KEY=mememememememememe
S3_SECRET=mysecretmysecret
S3_BUCKET=myapp-development
.foreman:
# development port is 3000
port: 3000
Foreman takes arguments to use a different Procfile (-f) and arguments to specify what to run. It also supports a .foreman file that allows those args to become defaults. See http://ddollar.github.com/foreman/ for more info.
I've used environment-specific Procfiles before, which is pretty simple and works fine.
Basically you have Procfile.development, Procfile.production, etc. In each you can customize the procs you want to start, then run them via foreman like so:
foreman start -f Procfile.development
Another approach is to reference scripts in your Procfile, and within each script start up the appropriate process based on the environment. The creator of Foreman does this and has an example in his Anvil project for your reference.
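A hedged sketch of that pattern (the bin/web name and the branching are illustrative, not taken from Anvil):
# Procfile
web: bin/web
# bin/web (illustrative shell script; remember to chmod +x it)
#!/bin/sh
if [ "$RAILS_ENV" = "production" ]; then
  exec bundle exec unicorn_rails -p "$PORT" -c ./config/unicorn.rb
else
  exec bundle exec rails server -p "${PORT:-3000}"
fi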
Our solution was to use a separate job type in our Procfile for dev vs. production. It is not the most DRY method, but it works...
sidekiq: bundle exec sidekiq
sidekiq-prod: bundle exec sidekiq -e production
Then we run foreman specifying the prod job on the production system:
foreman start -c sidekiq-prod=4
