Running Faye in a different environment than Rails - ruby-on-rails

I'm trying to set up Rails 4.2.1 and Faye with CSRF protection. I used the guide at http://faye.jcoglan.com/security/csrf.html to get everything working.
config/csrf_protection.rb
class CsrfProtection
  def incoming(message, request, callback)
    session_token = request.session['_csrf_token']
    message_token = message['ext'] && message['ext'].delete('csrfToken')

    unless session_token == message_token
      message['error'] = '401::Access denied'
    end

    callback.call(message)
  end
end
config/application.rb
config.middleware.insert_after ActionDispatch::Session::CookieStore,
                               Faye::RackAdapter,
                               :extensions => [CsrfProtection.new],
                               :mount      => '/live',
                               :timeout    => 25
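For the middleware line to resolve, both Faye and the extension class have to be loaded before it runs. A minimal sketch of the requires (assuming csrf_protection.rb lives in config/ as shown above):

require 'faye'
require_relative 'csrf_protection'  # defines CsrfProtection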
config/unicorn.rb
worker_processes Integer(ENV["WEB_CONCURRENCY"] || 3)
timeout 15
preload_app true
listen 3000, tcp_nopush: false

stderr_path "log/unicorn.stderr.log"
stdout_path "log/unicorn.stdout.log"

before_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn master intercepting TERM and sending myself QUIT instead'
    Process.kill 'QUIT', Process.pid
  end

  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn worker intercepting TERM and doing nothing. Wait for master to send QUIT'
  end

  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.establish_connection
end
And I start unicorn with:
bundle exec unicorn -c config/unicorn.rb
I'm getting the following errors in my log file:
Rack::Lint::LintError: Status must be >=100 seen as integer
ThreadError: deadlock; recursive locking
The errors are being caused by Faye running in development mode. Is it possible, using this setup, to tell Faye to run in a production environment while letting the rest of my application run in development mode?
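For context: Rack::Lint is one of the middlewares Unicorn itself wraps around the app when RACK_ENV is "development". One avenue to try, sketched here rather than taken from the post, is to pin RAILS_ENV explicitly while handing Unicorn a different Rack env so it skips its development-only middleware:

# RAILS_ENV keeps Rails in development mode; -E sets RACK_ENV for Unicorn,
# and any value other than "development" or "deployment" adds no extra middleware.
RAILS_ENV=development bundle exec unicorn -c config/unicorn.rb -E none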

Related

RuntimeError: The connection cannot be reused in the forked process (ruby-oci8 gem)

I am getting
RuntimeError: The connection cannot be reused in the forked process
from
ruby-oci8 (2.1.3) lib/oci8/cursor.rb:28:in `__initialize'
I am using an Oracle database for a Rails application. I recently started deploying the application with Unicorn and Nginx, and from then onward I have been getting this error. Please help.
I am using Ruby 1.9.3 with Rails 3.1, and this is my unicorn.rb file:
rails_env = ENV['RAILS_ENV'] || 'production'

worker_processes 6
preload_app true
timeout 75

app_dir = File.expand_path("../../..", __FILE__)
shared_path = "#{app_dir}/shared"
working_directory "#{app_dir}/current"

# Set up socket location
listen "#{shared_path}/sockets/unicorn.sock", :backlog => 64

# Set master PID location
pid "#{shared_path}/pids/unicorn.pid"
stderr_path "#{shared_path}/log/unicorn.log"
stdout_path "#{shared_path}/log/unicorn.log"

before_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn master intercepting TERM and sending myself QUIT instead'
    Process.kill 'QUIT', Process.pid
  end

  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  # reset the connection since the pre-forked connection will be stale
  Signal.trap 'TERM' do
    puts 'Unicorn worker intercepting TERM and doing nothing. Wait for master to send QUIT'
  end

  if defined?(ActiveRecord::Base)
    config = ActiveRecord::Base.configurations[Rails.env] ||
             Rails.application.config.database_configuration[Rails.env]
    config['pool'] = ENV['DB_POOL'] || 5
    ActiveRecord::Base.establish_connection(config)
  end
end
I found the solution. My model contains an explicit establish_connection call, like this:
class User < ActiveRecord::Base
  establish_connection "user_#{Rails.env}"
end
and the before_fork block in my unicorn.rb file contains the following code:
defined?(ActiveRecord::Base) and
  ActiveRecord::Base.connection.disconnect!
I found that the above code doesn't work when you explicitly call establish_connection in a model. To resolve the issue, I had to change it to this:
defined?(ActiveRecord::Base) &&
  ActiveRecord::Base.remove_connection(User) &&
  ActiveRecord::Base.connection.disconnect!
and it worked like a charm. :-)
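If more models declare their own connections, the same idea should generalize: remove each model-specific connection before dropping the default one. A sketch of the before_fork block (the list of models is hypothetical):

before_fork do |server, worker|
  if defined?(ActiveRecord::Base)
    # Models that call establish_connection themselves hold their own
    # connection pools, so remove each of those explicitly first.
    [User].each { |model| ActiveRecord::Base.remove_connection(model) }
    ActiveRecord::Base.connection.disconnect!
  end
end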

Rails 4. Activerecord validation rules and callbacks are being disabled after unicorn restart

I use Ruby v2.2.3, Rails v4.2.4, Unicorn v5.1.0 in production.
ActiveRecord validation rules and callbacks for all models of the app are disabled after a Unicorn restart (triggered by the log rotation script). Here is the script:
/var/www/fitness_crm/shared/log/production.log
/var/www/fitness_crm/shared/log/unicorn.stderr.log
/var/www/fitness_crm/shared/log/unicorn.stdout.log {
  daily
  missingok
  rotate 30
  compress
  notifempty
  create 640 deploy deploy
  sharedscripts
  postrotate
    kill -s USR2 `cat /var/www/fitness_crm/shared/tmp/pids/unicorn.pid`
  endscript
}
We've changed the script to send USR1 instead of USR2, but nothing changed; when USR1 is sent, validations/callbacks remain disabled. Here is our Unicorn configuration file:
working_directory "/var/www/fitness_crm/current"
pid "/var/www/fitness_crm/shared/tmp/pids/unicorn.pid"
stdout_path "/var/www/fitness_crm/shared/log/unicorn.stdout.log"
stderr_path "/var/www/fitness_crm/shared/log/unicorn.stderr.log"

listen "/tmp/unicorn.fitness_crm_production.sock"
worker_processes 8
timeout 30
preload_app true

before_exec do |server|
  ENV["BUNDLE_GEMFILE"] = "/var/www/fitness_crm/current/Gemfile"
end

before_fork do |server, worker|
  # Disconnect since the database connection will not carry over
  if defined? ActiveRecord::Base
    ActiveRecord::Base.connection.disconnect!
  end

  # Quit the old unicorn process
  old_pid = "#{server.config[:pid]}.oldbin"
  if File.exists?(old_pid) && server.pid != old_pid
    begin
      Process.kill("QUIT", File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
      # someone else did our job for us
    end
  end

  if defined?(Resque)
    Resque.redis.quit
  end

  sleep 1
end

after_fork do |server, worker|
  # Start up the database connection again in the worker
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.establish_connection
  end

  if defined?(Resque)
    Resque.redis = 'localhost:6379'
  end
end
After this, we kill Unicorn and start it manually with this command:
bundle exec unicorn -D -c /var/www/fitness_crm/shared/config/unicorn.rb -E production
After this, everything is fine and validations and callbacks are enabled again.
Please help me find out the reason for this behavior and how to fix it.
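For what it's worth, Unicorn's SIGNALS documentation gives these two signals very different jobs, which may help narrow this down: USR1 only asks the master and workers to reopen their log files, while USR2 re-executes the running binary (a full zero-downtime upgrade). If the goal is plain log rotation, a postrotate block along these lines should suffice (paths copied from the script above):

postrotate
  # USR1 = reopen logs; USR2 = re-exec the binary and reload app code
  kill -s USR1 `cat /var/www/fitness_crm/shared/tmp/pids/unicorn.pid`
endscript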

Why does puma not have a `before_fork` method like Unicorn?

I'm new to working with Puma and have previously worked with Unicorn.
A Unicorn config has before_fork and after_fork hooks that disconnect the database connection and then re-establish it after the fork.
However, Puma doesn't have that. It only has on_worker_boot, which is conceptually similar to the after_fork hook.
Doesn't Puma utilize forking of worker processes as well? Doesn't it need to disconnect before forking like Unicorn?
Thanks!
Example files
config/unicorn.rb
before_fork do |server, worker|
  # other settings
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.connection.disconnect!
  end
end

after_fork do |server, worker|
  # other settings
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.establish_connection
  end
end
config/puma.rb
on_worker_boot do
  # Worker specific setup for Rails 4.1+
  # See: https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server#on-worker-boot
  ActiveRecord::Base.establish_connection
end
In fact, it now has this feature: https://github.com/puma/puma/pull/754
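With that pull request merged, a clustered config/puma.rb can mirror the Unicorn pattern. A minimal sketch, assuming a Puma version that ships the hook:

# config/puma.rb (sketch)
workers 3
preload_app!

before_fork do
  # Runs once in the master, just before workers are forked.
  ActiveRecord::Base.connection_pool.disconnect! if defined?(ActiveRecord::Base)
end

on_worker_boot do
  # Runs in each worker after the fork.
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
end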

ByeBug remote won't hit breakpoints with Unicorn

I launch Unicorn through Foreman, so the debugger prompt is swallowed. In the past I've had luck with the debugger gem by connecting to it as a remote debugger.
We're about to upgrade to Ruby 2.1.2 which, as I understand it, is not compatible with debugger.
I've changed the remote debugger code to use byebug:
require 'byebug'

def find_available_port
  server = TCPServer.new(nil, 0)
  server.addr[1]
ensure
  server.close if server
end

port = find_available_port
puts "Remote debugger on port #{port}"
Byebug.start_server('localhost', port)
Once unicorn is started, I can connect to byebug:
$ byebug -R localhost:54623
Connecting to byebug server localhost:54623
Connected.
But my code is littered with byebug calls, and they never trigger a breakpoint in the remote debugger. Pages that block at those breakpoints under a local debugger load straight through when I'm connected remotely.
The unicorn file specifies only one worker, so I'm reasonably sure that's not it:
require File.dirname(__FILE__)+'/application'

if Rails.env.development?
  worker_processes 1
  timeout_override = ENV['WEBSERVER_TIMEOUT_OVERRIDE']
  timeout Integer(timeout_override || 3600)
  if timeout_override
    puts "Development: Using WEBSERVER_TIMEOUT_OVERRIDE of #{timeout_override} seconds"
  end
else
  worker_processes Integer(ENV['WEB_CONCURRENCY'] || 3)
  timeout 25
end

preload_app true

before_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn master intercepting TERM and sending myself QUIT instead'
    Process.kill 'QUIT', Process.pid
  end

  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn worker intercepting TERM and doing nothing. Wait for master to send QUIT'
  end

  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.establish_connection
end
Any ideas would be greatly appreciated.
I think you missed one simple setting:
Byebug.wait_connection = true
Change this
port = find_available_port
puts "Remote debugger on port #{port}"
Byebug.start_server('localhost', port)
to this
port = find_available_port
puts "Remote debugger on port #{port}"
Byebug.wait_connection = true
Byebug.start_server('localhost', port)
This should do the magic. Hope it helps.
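Put together, the bootstrap from the question with the fix folded in (same port-discovery helper; on newer byebug releases the server code may need require 'byebug/core'):

require 'socket'
require 'byebug'

def find_available_port
  # Bind to an OS-assigned free port, read it back, then release it.
  server = TCPServer.new(nil, 0)
  server.addr[1]
ensure
  server.close if server
end

port = find_available_port
puts "Remote debugger on port #{port}"

Byebug.wait_connection = true            # block until a client attaches
Byebug.start_server('localhost', port)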

Zero downtime restart of unicorn/ rails

In our deploy scripts we use the following snippet to restart unicorn:
desc "Zero downtime restart of Unicorn"
task :restart do
run "kill -s USR2 unicorn_pid"
end
The master process forks, starts the new workers, and then kills the old one. But now it seems the new master kills the old master and takes over new connections before its children are fully up. Because we disabled preloading of the app (preload_app false), the new workers take around 30-60 seconds to start, and during this time new connections and the website hang. How can I avoid this, i.e. make the new master take over only once the new children are fully up and ready to serve requests? :)
Update:
My unicorn.rb looks like this:
# name of application
application = "myapp"

# environment specific stuff
case ENV["RAILS_ENV"]
when "integration", "staging"
  worker_processes 1
when "production"
  worker_processes 4
else
  raise "Invalid runtime environment '#{ENV["RAILS_ENV"]}'"
end

# set directories
base_dir = "/srv/#{application}/current"
shared_path = "/srv/#{application}/shared"
working_directory base_dir

# maximum runtime of requests
timeout 300

# multiple listen directives are allowed
listen "#{shared_path}/unicorn.sock", :backlog => 64

# location of pid file
pid "#{shared_path}/pids/unicorn.pid"

# location of log files
stdout_path "#{shared_path}/log/unicorn.log"
stderr_path "#{shared_path}/log/unicorn.err"

# combine REE with "preload_app true" for memory savings
# http://rubyenterpriseedition.com/faq.html#adapt_apps_for_cow
preload_app false
if GC.respond_to?(:copy_on_write_friendly=)
  GC.copy_on_write_friendly = true
end

before_exec do |server|
  # see http://unicorn.bogomips.org/Sandbox.html
  ENV["BUNDLE_GEMFILE"] = "#{base_dir}/Gemfile"
end

before_fork do |server, worker|
  # the following is highly recommended for "preload_app true"
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.connection.disconnect!
  end
  if defined?(Sequel::Model)
    Sequel::DATABASES.each { |db| db.disconnect }
  end

  # This allows a new master process to incrementally
  # phase out the old master process with SIGTTOU to avoid a
  # thundering herd (especially in the "preload_app false" case)
  # when doing a transparent upgrade. The last worker spawned
  # will then kill off the old master process with a SIGQUIT.
  old_pid = "#{server.config[:pid]}.oldbin"
  if old_pid != server.pid
    begin
      sig = (worker.nr + 1) >= server.worker_processes ? :QUIT : :TTOU
      Process.kill(sig, File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
      # someone else did our job for us
    end
  end
end

after_fork do |server, worker|
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.establish_connection
  end
end
I think the main problem is that there's no child_ready hook; before_fork and after_fork are called "too early". I could add some logic in the after_fork hook to somehow detect when a child is ready... but I hope there's an easier solution? :)
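One possible direction, sketched on the assumption that readiness is easiest to observe from outside the master: drop the old-master kill from before_fork and let the deploy script retire the old master only once a real request succeeds against the new workers. The health-check path and curl's unix-socket support here are assumptions:

kill -s USR2 `cat /srv/myapp/shared/pids/unicorn.pid`
# Poll through the socket until a new worker actually serves a request,
# then retire the old master (hypothetical /health endpoint).
until curl -fs --unix-socket /srv/myapp/shared/unicorn.sock http://app/health > /dev/null; do
  sleep 1
done
kill -s QUIT `cat /srv/myapp/shared/pids/unicorn.pid.oldbin`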
