I have a Rails 5 API with Sidekiq and capistrano-sidekiq that has been working fine for the last few months.
The other day, Sidekiq stopped processing jobs. Checking the logs, I saw:
bundler: failed to load command: sidekiq (/home/user/project/shared/bundle/ruby/2.2.0/bin/sidekiq)
SignalException: SIGHUP
/home/user/project/shared/bundle/ruby/2.2.0/gems/activesupport-5.0.0.1/lib/active_support/core_ext/module/attribute_accessors.rb:119:in `<class:Module>'
/home/user/project/shared/bundle/ruby/2.2.0/gems/activesupport-5.0.0.1/lib/active_support/core_ext/module/attribute_accessors.rb:6:in `<top (required)>'
... (snip)
Whenever I try to start Sidekiq, the above appears in the log. It had previously been shut down with the quiet command (USR1) and exited cleanly:
INFO: Received USR1, no longer accepting new work
I'm using Capistrano to deploy, which has worked fine until this happened. This is the command Capistrano used to start Sidekiq:
INFO [2aac3b89] Running $HOME/.rbenv/bin/rbenv exec bundle exec sidekiq --index 0 --pidfile /home/user/project/shared/tmp/pids/sidekiq-0.pid --environment production --logfile /home/user/project/shared/log/sidekiq.log --daemon as user@xxx.xxx.xxx.xxx
DEBUG [2aac3b89] Command: cd /home/user/project/current && ( export RBENV_ROOT="$HOME/.rbenv" RBENV_VERSION="2.2.3" ; $HOME/.rbenv/bin/rbenv exec bundle exec sidekiq --index 0 --pidfile /home/user/project/shared/tmp/pids/sidekiq-0.pid --environment production --logfile /home/user/project/shared/log/sidekiq.log --daemon )
INFO [2aac3b89] Finished in 1.176 seconds with exit status 0 (successful).
What is going on? And how can I ensure it doesn't happen in future?
In Capistrano, I had :pty set to true. I guess the PTY was delivering SIGHUP and killing the daemon before it had a chance to start up. I'm still not sure why this only became an issue now, but setting :pty to false seems to have done the trick.
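For reference, this is a one-line change in Capistrano's configuration (config/deploy.rb in a standard Capistrano 3 setup):
set :pty, false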
I am using Rails 4.0 and Ruby 2.3.
I am using the following gems for Capistrano:
capistrano (3.4.0)
capistrano-bundler (1.1.4)
capistrano-rails (1.1.5)
I am using the capistrano/sidekiq module for running Sidekiq-related tasks.
When I run
cap staging sidekiq:start
I get the following error:
cap aborted!
SSHKit::Runner::ExecuteError: Exception while executing as deploy@10.50.11.190: sidekiq exit status: 1
sidekiq stdout: Nothing written
sidekiq stderr: Nothing written
SSHKit::Command::Failed: sidekiq exit status: 1
sidekiq stdout: Nothing written
sidekiq stderr: Nothing written
Below is the command that failed when running the above task:
cd /opt/optimus_apps/merchant_tracking/current && /usr/bin/env sidekiq --index 0 --pidfile /opt/optimus_apps/merchant_tracking/shared/server/tmp/pids/sidekiq-0.pid --environment staging --logfile /opt/optimus_apps/merchant_tracking/shared/server/log/sidekiq.log --config /opt/optimus_apps/merchant_tracking/shared/server/config/sidekiq.yml --daemon
I tried running the above command directly on the staging server with a small change and it worked. I ran the following command.
cd /opt/optimus_apps/merchant_tracking/current/server && /usr/bin/env sidekiq --index 0 --pidfile /opt/optimus_apps/merchant_tracking/shared/server/tmp/pids/sidekiq-0.pid --environment staging --logfile /opt/optimus_apps/merchant_tracking/shared/server/log/sidekiq.log --config /opt/optimus_apps/merchant_tracking/shared/server/config/sidekiq.yml --daemon
I changed the rails path from
/opt/optimus_apps/merchant_tracking/current --> /opt/optimus_apps/merchant_tracking/current/server
The problem is that my Rails app lives inside the current/server folder.
So my question is: how can I make the Sidekiq task use the current/server folder rather than the current folder?
Thanks in advance.
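One workaround (a sketch only; this is not a built-in capistrano-sidekiq option, and the :app_subdir variable below is hypothetical) is to override the gem's start task in deploy.rb so it runs from the server subdirectory:
set :app_subdir, 'server'  # hypothetical setting naming the Rails root inside each release
Rake::Task['sidekiq:start'].clear_actions
namespace :sidekiq do
  task :start do
    on roles(:app) do
      # run from current/server instead of current
      within "#{current_path}/#{fetch(:app_subdir)}" do
        execute :bundle, :exec, :sidekiq,
          '--index', '0',
          '--environment', fetch(:stage).to_s,
          '--config', "#{shared_path}/server/config/sidekiq.yml",
          '--logfile', "#{shared_path}/server/log/sidekiq.log",
          '--pidfile', "#{shared_path}/server/tmp/pids/sidekiq-0.pid",
          '--daemon'
      end
    end
  end
end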
I'm using a before_restart.rb hook in OpsWorks and I have a problem when it runs "rake i18n:js:export". I don't know why Sidekiq is running with this rake task. It fails only in the setup stage of OpsWorks; when I deploy, the error disappears.
[2015-01-09T18:52:17+00:00] INFO: deploy[/srv/www/XXX] queueing checkdeploy hook /srv/www/XXX/releases/20150109185157/deploy/before_restart.rb
[2015-01-09T18:52:17+00:00] INFO: Processing execute[rake i18n:js:export] action run (/srv/www/XXXX/releases/20150109185157/deploy/before_restart.rb line 3)
Error executing action `run` on resource 'execute[rake i18n:js:export]'
Mixlib::ShellOut::ShellCommandFailed
Expected process to exit with [0], but received '1'
---- Begin output of bundle exec rake i18n:js:export ----
STDOUT: 2015-01-09T18:52:30Z 1808 TID-92c6g INFO: Sidekiq client with redis options {}
STDERR: /home/deploy/.bundler/XXXX/ruby/2.1.0/gems/redis-3.1.0/lib/redis/client.rb:309:in `rescue in establish_connection': Error connecting to Redis on 127.0.0.1:6379 (ECONNREFUSED) (Redis::CannotConnectError)
The Sidekiq client (NOT the Sidekiq server) is running because it is configured in an initializer. When rake runs, it loads the entire Rails app environment. So either allow an environment variable to disable the Sidekiq client in config/initializers/sidekiq.rb, or make sure redis-server is properly configured on the instance you're running this on.
unless ENV['DISABLE_SIDEKIQ']
  # Sidekiq.configure...
end
DISABLE_SIDEKIQ=true bundle exec rake do:stuff
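For completeness, a fuller sketch of that guard in config/initializers/sidekiq.rb (the REDIS_URL variable and the localhost fallback are illustrative, not from the question):
unless ENV['DISABLE_SIDEKIQ']
  redis_config = { url: ENV.fetch('REDIS_URL', 'redis://127.0.0.1:6379/0') }  # adjust for your setup
  Sidekiq.configure_server { |config| config.redis = redis_config }
  Sidekiq.configure_client { |config| config.redis = redis_config }
end
With that in place, the OpsWorks setup stage can run rake with DISABLE_SIDEKIQ=true until redis-server is available.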
I'm using Puma as the application server for my Rails 4 project on MRI 2.1.0, and Capistrano 3 to handle deployments. Everything has been working like a charm, but I recently noticed an issue with my deployment process: if I change my Gemfile, Puma fails to complete a phased restart and eventually all workers get killed. I'm running Puma in cluster mode and preload_app! is set to true.
Here is my Capistrano recipe to handle phased-restart.
desc "Restart the application (phased restart)"
task :phased_restart do
on roles(:app) do |h|
execute "cd #{fetch(:current_path)} && bundle exec pumactl -S #{fetch(:puma_state)} phased-restart", :pty => true
end
end
This is truncated output of Capistrano log.
DEBUG [4790766f] Command: cd /home/app/current && bundle exec pumactl -S /home/app/shared/tmp/pids/puma.state phased-restart
DEBUG [de00176a] Command phased-restart sent success
INFO [de00176a] Finished in 0.909 seconds with exit status 0 (successful).
This is my config/puma.rb file.
#!/usr/bin/env puma
require 'active_support'
environment 'production'
daemonize
pidfile '/home/app/shared/tmp/pids/puma.pid'
state_path '/home/app/shared/tmp/pids/puma.state'
stdout_redirect 'log/puma_stdout.log', 'log/puma_stderr.log'
threads 100, 100
bind 'tcp://0.0.0.0:9292'
bind 'unix:////home/app/shared/tmp/pids/puma.sock'
on_worker_boot do
  ActiveSupport.on_load(:active_record) do
    ActiveRecord::Base.connection.disconnect! rescue ActiveRecord::ConnectionNotEstablished
    ActiveRecord::Base.establish_connection(YAML.load_file("#{Rails.root}/config/database.yml")[Rails.env])
  end
end
workers 4
preload_app!
Does anybody see anything wrong in my puma config file?
So currently, when this happens, I run bundle exec cap production deploy:start to start Puma. But I want zero-downtime deployment in every case.
Can Puma keep using the old worker processes if the newly spawned processes can't be started?
Do you know that preload_app! conflicts with phased restarts?
Proof: https://github.com/puma/puma/blob/0ea7af5e2cc8fa192ec82934a4a47880bdb592f8/lib/puma/configuration.rb#L333-L335
I think first you need to decide which to use.
To do a phased restart you need to enable the prune_bundler option and disable preload_app!
See https://github.com/puma/puma/blob/master/DEPLOYMENT.md#restarting
To do zero-downtime deploys with Capistrano, you can use the capistrano3-puma gem with the following options:
set :puma_preload_app, false
set :puma_prune_bundler, true
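Translated back into config/puma.rb, the relevant lines would look roughly like this (a sketch; the rest of the file can stay as in the question):
workers 4
prune_bundler    # re-exec workers with a clean bundler environment on phased restart
# preload_app!   # removed: preloading the app is incompatible with phased restarts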
It's my first attempt to get Redis working on Heroku.
I've added one worker dyno (just today, so I haven't paid yet), added the RedisToGo Nano add-on, tested background jobs on my local machine, and pushed the app to Heroku.
heroku ps
gives
=== web: `bundle exec rails server -p $PORT`
web.1: up 2013/03/03 18:26:09 (~ 37m ago)
=== worker: `bundle exec rake jobs:work`
worker.1: crashed 2013/03/03 19:02:15 (~ 1m ago)
Sidekiq Web Interface says that one job is enqueued, but zero processed or failed.
I'm guessing it's because my worker dyno has crashed.
Are there any noob mistakes that I don't know about?
(e.g. I need to run some command to start listening to background jobs etc)
heroku logs --tail doesn't show any errors, so I don't understand why my worker dyno crashes.
I did some research and fixed it like this:
Under the app's root directory I created a file called "Procfile" with this content:
web: bundle exec rails server -p $PORT
worker: bundle exec sidekiq -c 5 -v
Got this idea from here.
After that it worked ok.
Also make sure you set up REDIS_PROVIDER:
heroku config:set REDIS_PROVIDER=REDISTOGO_URL
Sidekiq's GitHub page also has instructions.
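If you prefer to point Sidekiq at Redis explicitly instead of relying on REDIS_PROVIDER, a minimal config/initializers/sidekiq.rb sketch (REDISTOGO_URL is the variable the RedisToGo add-on sets):
Sidekiq.configure_server do |config|
  config.redis = { url: ENV['REDISTOGO_URL'] }
end
Sidekiq.configure_client do |config|
  config.redis = { url: ENV['REDISTOGO_URL'] }
end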
I was following the RailsCast on delayed_job. Things are working perfectly on my machine. How can I start delayed_job workers in production mode?
I am using the delayed_job gem (2.1.4).
RAILS_ENV=production script/delayed_job start
For Rails 4
RAILS_ENV=production bin/delayed_job start
Solved my problem.
It may give you an error that the tmp directory doesn't exist. Just create one and run the previous command again.
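Assuming the default pid directory used by the daemons-based script, creating it from the app root is enough:
mkdir -p tmp/pids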
You can try to run the following command:
cd ~/path_to_your_app/current && RAILS_ENV=production /usr/local/bin/ruby ./script/delayed_job start
where you should adjust /usr/local/bin/ruby based on your production server's Ruby configuration. Note that RAILS_ENV has to be set on the delayed_job command itself, not on the cd, or it won't be passed through the &&.
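If you deploy with Capistrano 3, a small task along these lines (a sketch; the role, the deploy:published hook, and the bin/delayed_job path are assumptions that depend on your setup) keeps the workers restarted on each release:
namespace :delayed_job do
  desc 'Restart delayed_job workers'
  task :restart do
    on roles(:app) do
      within current_path do
        with rails_env: fetch(:rails_env, 'production') do
          execute :bundle, :exec, 'bin/delayed_job', :restart
        end
      end
    end
  end
end
after 'deploy:published', 'delayed_job:restart'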