I am having unexpected and significant problems trying to get a Rails app, running under Unicorn, to connect to a password-protected Redis server.
Using bundle exec rails c production on the command line, I can issue commands through Resque.redis. However, it seems that my configuration is being lost when it's forked under Unicorn.
Using a non-password-protected Redis server Just Works. However, I intend to run workers on servers other than the one where the Redis server lives, so it needs to be password protected.
I have also had success using a password-protected Redis server (with the same technique), but under Passenger rather than Unicorn.
I have the following setup:
# config/resque.yml
development: localhost:6379
test: localhost:6379
production: redis://user:PASSWORD@oak.isc.org:6379
# config/initializers/redis.rb
rails_root = ENV['RAILS_ROOT'] || File.dirname(__FILE__) + '/../..'
rails_env = ENV['RAILS_ENV'] || 'development'
$resque_config = YAML.load_file(rails_root + '/config/resque.yml')
uri = URI.parse($resque_config[rails_env])
Resque.redis = Redis.new(host: uri.host, port: uri.port, password: uri.password)
# unicorn.rb bootup file
preload_app true
before_fork do |server, worker|
Redis.current.quit
end
after_fork do |server, worker|
Redis.current.quit
end
OK, for the sake of other people who might be Googling this problem: I've solved this, for myself at least.
The basic problem is calling Redis.new anywhere else in the code, e.g. in your geocoder setup or your Unicorn config file.
Just make sure that every time you initialize Redis you pass in the appropriate values,
e.g. something like
REDIS = Redis.connect(:url => ENV['REDISTOGO_URL'])
everywhere, and you should never have a bare
Redis.new
as it will default to localhost and the default port.
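One way to make that rule easy to follow is to build the connection in a single initializer and reuse it everywhere. A minimal sketch (the REDIS_URL/REDISTOGO_URL variable names and the localhost fallback are just examples, not anything Resque requires):

```ruby
# config/initializers/redis.rb -- the one place that knows the full URL
# (sketch: adjust the env variable names to your own setup)
require 'redis'

url = ENV['REDIS_URL'] || ENV['REDISTOGO_URL'] || 'redis://localhost:6379'

REDIS = Redis.new(url: url)
Resque.redis = REDIS
```

Any other code that needs Redis can then refer to the REDIS constant instead of calling Redis.new itself.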
UPDATED: Totally different idea, based on #lmarlow's comment on a Resque issue.
I bet it breaks wherever you have redis ~> 3 (I mean the Ruby client version, not the server version).
As of this writing, Resque needs redis ~> 2 but doesn't specify that in its gemspec, so you have to help it out by adding this to your Gemfile:
gem 'redis', '~>2' # until a new version of resque comes out
gem 'resque'
Also, make sure that bundler is being used everywhere. Otherwise, if your system has a new version of the Redis gem, it will get used and Resque will fail like before.
Finally, a cosmetic note... you could simplify the config to:
# config/initializers/redis.rb
$resque_redis_url = uris_per_environment[rails_env] # note no URI.parse
Resque.redis = $resque_redis_url
and then
# unicorn.rb bootup file
after_fork do |server, worker|
Resque.redis = $resque_redis_url
end
This was helpful to me:
source: https://github.com/redis/redis-rb/blob/master/examples/unicorn/unicorn.rb
require "redis"
worker_processes 3
# If you set the connection to Redis *before* forking,
# you will cause forks to share a file descriptor.
#
# This causes a concurrency problem by which one fork
# can read or write to the socket while others are
# performing other operations.
#
# Most likely you'll be getting ProtocolError exceptions
# mentioning a wrong initial byte in the reply.
#
# Thus we need to connect to Redis after forking the
# worker processes.
after_fork do |server, worker|
Redis.current.quit
end
What worked for me was the unicorn config here: https://stackoverflow.com/a/14636024/18706
before_fork do |server, worker|
if defined?(Resque)
Resque.redis.quit
Rails.logger.info("Disconnected from Redis")
end
end
after_fork do |server, worker|
if defined?(Resque)
Resque.redis = REDIS_WORKER
Rails.logger.info("Connected to Redis")
end
end
I think the issue was with resque-web and its config file; they have fixed it now. In version 0.0.11 they mention it in a comment as well: https://github.com/resque/resque-web/blob/master/config/initializers/resque_config.rb#L3
Earlier, their file looked like this: https://github.com/resque/resque-web/blob/v0.0.9/config/initializers/resque_config.rb
And if, for any reason, you cannot upgrade, then try setting the environment variable RAILS_RESQUE_REDIS=<host>:<port> instead, since the initializers load after it tries to connect to Redis (and fails).
Related
Hi, I am using the CanCan gem for user roles, with:
database -> Oracle
Oracle adapter -> oracle_enhanced_adapter 1.4.1
ruby 1.9.3
rails 3.2.16
web server -> Unicorn
When I refresh the browser after some time (2 or 3 minutes), it gives ActiveRecord::StatementInvalid (RuntimeError: The connection cannot be reused in the forked process.).
Can anyone help me?
I found an answer in a Google group:
dbconfig = ActiveRecord::Base.remove_connection
child_pid = fork do
# establish new db connection for forked child
ActiveRecord::Base.establish_connection(dbconfig)
# do stuff...
end
# re-establish db connection
ActiveRecord::Base.establish_connection(dbconfig)
# detach the process so we don't wait for it
Process.detach(child_pid)
in the environment.rb file, and it works like a charm.
You need to add this to your unicorn.rb file
#config/unicorn.rb
after_fork do |server, worker|
defined?(ActiveRecord::Base) && ActiveRecord::Base.establish_connection
end
and you can run the server with the command unicorn -c config/unicorn.rb to use the config.
So far I am using the Thin server. I am planning to switch to Unicorn to add some concurrency to the web dynos, and I am concerned because I read through this article and found this code:
before_fork do |server, worker|
# ...
# If you are using Redis but not Resque, change this
if defined?(Resque)
Resque.redis.quit
Rails.logger.info('Disconnected from Redis')
end
end
after_fork do |server, worker|
# ...
# If you are using Redis but not Resque, change this
if defined?(Resque)
Resque.redis = ENV['REDIS_URI']
Rails.logger.info('Connected to Redis')
end
end
I don't really understand why this code is necessary, or whether I should add it when using Resque.
What do you guys think I should take into account when switching to Unicorn if I am using some Resque workers?
Unicorn is a forking, multi-process server. It loads your Rails environment in one process, then forks a number of workers. Using fork causes it to copy the entire parent process, including any opened connections to databases, memcache, redis, etc.
To fix this, you should reconnect any live connections in the after_fork block, as shown in the example. You only need to reconnect the connections/services you're actually using.
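As a concrete sketch of that advice, a minimal config/unicorn.rb that drops inherited connections in the master and reopens them per worker might look like this (assuming ActiveRecord and Resque; the REDIS_URI variable name is just an example):

```ruby
# config/unicorn.rb -- sketch, assuming preload_app with ActiveRecord + Resque
preload_app true

before_fork do |server, worker|
  # Drop the master's connections before forking; otherwise every
  # worker inherits the same open sockets (file descriptors).
  ActiveRecord::Base.connection.disconnect! if defined?(ActiveRecord::Base)
  Resque.redis.quit if defined?(Resque)
end

after_fork do |server, worker|
  # Each worker opens its own fresh connections.
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
  Resque.redis = ENV['REDIS_URI'] if defined?(Resque)
end
```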
Seeing a weird problem starting a Unicorn server - bundle exec ruby unicorn_rails.rb starts okay, but when I visit a URL, it shows:
Mysql2::Error (Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2))
So it seems Unicorn isn't connecting to the remote server configured in database.yml (it's trying to connect locally), despite other commands, e.g. bundle exec rails console, working fine. It seems to be ignoring that setting even though the environment is set correctly. This was working before, but something has broken it.
I put the full stack trace here:
https://gist.github.com/mahemoff/6029630
database.yml:
staging:
adapter: mysql2
database: slide_staging
host: 192.168.1.255
port: 3306
pool: 5
username: deploy
password: <%= ENV['DB_PASS'] || "notconfiguredyet" %>
timeout: 5000
reconnect: true
It might be linked to the Unicorn configuration, especially if you preload the app.
Do you have these lines in it?
before_fork do |server, worker|
# the following is highly recomended for Rails + "preload_app true"
# as there's no need for the master process to hold a connection
defined?(ActiveRecord::Base) and
ActiveRecord::Base.connection.disconnect!
end
after_fork do |server, worker|
# the following is *required* for Rails + "preload_app true",
defined?(ActiveRecord::Base) and
ActiveRecord::Base.establish_connection
end
I finally solved this. It turns out the Rails environment was being overridden: I'd introduced a config snippet that included a bug, using Rails.env = 'test' (assignment) instead of Rails.env == 'test' (comparison). I figured it out after noticing my development setup was running in the test environment even though ENV['RAILS_ENV'] was correctly set to development.
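The one-character difference is easy to demonstrate in plain Ruby (a sketch with a local variable standing in for Rails.env):

```ruby
# Comparison vs. assignment: the one-character bug described above
env = 'development'

env == 'test'   # comparison: evaluates to false, leaves env unchanged
puts env        # => development

env = 'test'    # assignment: silently switches the environment
puts env        # => test
```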
I am using Mongoid 3, with Rails 3.2.9 and Unicorn for production. I would like to set up a before_fork and after_fork for the connection to MongoDB; I found the following code for ActiveRecord:
before_fork do |server, worker|
# Replace with MongoDB or whatever
if defined?(ActiveRecord::Base)
ActiveRecord::Base.connection.disconnect!
Rails.logger.info('Disconnected from ActiveRecord')
end
end
after_fork do |server, worker|
# Replace with MongoDB or whatever
if defined?(ActiveRecord::Base)
ActiveRecord::Base.establish_connection
Rails.logger.info('Connected to ActiveRecord')
end
end
What is the relevant code for Mongoid (to connect and disconnect)?
Update:
You don't actually need to do this; for people coming to view this question, see:
http://mongoid.org/en/mongoid/docs/rails.html
"Unicorn and Passenger
When using Unicorn or Passenger, each time a child process is forked when using app preloading or smart spawning, Mongoid will automatically reconnect to the master database. If you are doing this in your application manually you may remove your code."
Though it would still be interesting to know what would be the equivalent Mongoid code.
What about
::Mongoid.default_session.connect
::Mongoid.default_session.disconnect
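Dropped into a Unicorn config, those calls would look roughly like this (an untested sketch for Mongoid 3, whose Moped-based sessions expose connect/disconnect; the API differs in later versions):

```ruby
# config/unicorn.rb -- sketch for Mongoid 3 (Moped-based sessions)
before_fork do |server, worker|
  ::Mongoid.default_session.disconnect if defined?(::Mongoid)
end

after_fork do |server, worker|
  ::Mongoid.default_session.connect if defined?(::Mongoid)
end
```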
https://docs.mongodb.com/mongoid/current/tutorials/mongoid-configuration/#usage-with-forking-servers
The documentation on mongodb.com says that after_fork and before_fork hooks are required for Unicorn or Passenger.
This probably changed recently; this is the Mongoid 7.0 documentation.
I am successfully using the Unicorn server and delayed_job on Heroku for my site. However, I'm unsure if it's set up the best way and wanted to get more info on how to view worker processes, etc. My config/unicorn.rb file, which works, is pasted below:
worker_processes 3
preload_app true
timeout 30
# setting the below code because of the preload_app true setting above:
# http://unicorn.bogomips.org/Unicorn/Configurator.html#preload_app-method
@delayed_job_pid = nil
before_fork do |server, worker|
if defined?(ActiveRecord::Base)
ActiveRecord::Base.connection.disconnect!
Rails.logger.info('Disconnected from ActiveRecord')
# start the delayed_job worker queue in Unicorn, use " -n 2 " to start 2 workers
if Rails.env == "production"
# @delayed_job_pid ||= spawn("RAILS_ENV=production ../script/delayed_job start")
# @delayed_job_pid ||= spawn("RAILS_ENV=production #{Rails.root.join('script/delayed_job')} start")
@delayed_job_pid ||= spawn("bundle exec rake jobs:work")
elsif Rails.env == "development"
@delayed_job_pid ||= spawn("script/delayed_job start")
# @delayed_job_pid ||= spawn("rake jobs:work")
end
end
end
after_fork do |server, worker|
if defined?(ActiveRecord::Base)
ActiveRecord::Base.establish_connection
Rails.logger.info('Connected to ActiveRecord')
end
end
delayed_job says to use RAILS_ENV=production script/delayed_job start to start worker processes in production mode, but if I use this command I get "file not found" errors on Heroku. So, for now, I'm using bundle exec rake jobs:work in production, which seems to work, but is this correct?
How many processes are actually running in this setup, and could it be better optimized? My guess is that there is 1 Unicorn master process, 3 web workers, and 1 delayed_job worker, for a total of 5? When I run in dev mode locally I see 5 Ruby PIDs being spawned. Perhaps it would be better to use only 2 web workers and then give 2 workers to delayed_job (I have pretty low traffic).
All of this is run in a single Heroku dyno, so I have no idea how to check the status of the Unicorn workers, any idea how?
**Note: I've commented out the lines that break the site in production because Heroku says it "can't find the file".
Your config/unicorn.rb should not spawn DJ workers like this. You should specify a separate worker process in your Procfile, like so:
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
worker: bundle exec rake jobs:work
You can use foreman for local development to spin up both Unicorn and DJ. Your resulting config/unicorn.rb file would then be simpler:
worker_processes 3
preload_app true
timeout 30
# setting the below code because of the preload_app true setting above:
# http://unicorn.bogomips.org/Unicorn/Configurator.html#preload_app-method
before_fork do |server, worker|
if defined?(ActiveRecord::Base)
ActiveRecord::Base.connection.disconnect!
Rails.logger.info('Disconnected from ActiveRecord')
end
end
after_fork do |server, worker|
if defined?(ActiveRecord::Base)
ActiveRecord::Base.establish_connection
Rails.logger.info('Connected to ActiveRecord')
end
end
As I mentioned in the comments, you're spawning child processes that you never reap, and they will likely become zombies. Even if you added code to try to account for that, you're still trying to get a single dyno to perform multiple roles (web and background worker), which is likely going to cause you problems down the road (memory errors, etc.).
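For reference, the reaping problem itself is easy to see in plain Ruby: a spawned child must be waited on (or detached) by the parent, or it lingers in the process table as a zombie after it exits. A minimal sketch:

```ruby
require 'rbconfig'

# Option 1: wait for the child, which reaps it and exposes its status.
pid = spawn(RbConfig.ruby, '-e', 'exit 0')
Process.wait(pid)
puts $?.exitstatus  # => 0

# Option 2: detach, so a background thread reaps the child for us
# as soon as it exits.
pid2 = spawn(RbConfig.ruby, '-e', 'exit 0')
Process.detach(pid2)
```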
Foreman: https://devcenter.heroku.com/articles/procfile
DJ on Heroku: https://devcenter.heroku.com/articles/delayed-job
Spawn: http://www.ruby-doc.org/core-1.9.3/Process.html#method-c-spawn