Sidekiq - Prevent worker from being executed on a specific machine - ruby-on-rails

I'm working on a Rails project that uses Sidekiq. Our Sidekiq setup has two workers (WorkerA, which reads queue_a, and WorkerB, which reads queue_b). One of them has to be executed on the same server as the Rails app, and the other on a different server (or servers). How can I prevent WorkerB from being executed on the first server, and vice versa? Can a Sidekiq process be configured to run only specific workers?
EDIT:
The Redis server is on the same machine as the Rails app.
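For context, a minimal sketch of the two workers described above (class and queue names come from the question; the perform bodies are placeholders):
class WorkerA
  include Sidekiq::Worker
  sidekiq_options queue: :queue_a

  def perform(*args)
    # must run on the same server as the Rails app
  end
end

class WorkerB
  include Sidekiq::Worker
  sidekiq_options queue: :queue_b

  def perform(*args)
    # must run on the other server(s)
  end
end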

Use a hostname-specific queue. config/sidekiq.yml:
---
:verbose: false
:concurrency: 25
:queues:
  - default
  - <%= `hostname`.strip %>
In your worker:
class ImageUploadProcessor
  include Sidekiq::Worker
  sidekiq_options queue: `hostname`.strip

  def perform(filename)
    # process image
  end
end
More detail on my blog:
http://www.mikeperham.com/2013/11/13/advanced-sidekiq-host-specific-queues/
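You can also override the queue at enqueue time to target a particular machine's queue (the hostname "worker-01" below is hypothetical):
# push this job to the queue watched by the Sidekiq process on "worker-01"
ImageUploadProcessor.set(queue: "worker-01").perform_async("photo.jpg")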

Well, here is the way to start Sidekiq with options (note that each queue needs its own -q flag):
nohup bundle exec sidekiq -q queue_a -q queue_b -c 5 -e #{Rails.env} -P #{pidfile} 2>&1 &
You can start Sidekiq with only specific queues; for example, run
nohup bundle exec sidekiq -q queue_a -c 5 -e #{Rails.env} -P #{pidfile} 2>&1 &
to execute only WorkerA.
To run different workers on different servers, do something like this:
system "nohup bundle exec sidekiq #{queue_flags} -c 5 -e #{Rails.env} -P #{pidfile} 2>&1 &"

def queue_flags
  if ENV['SERVER_ROLE'] == 'a'      # distinguish servers via an ENV var,
    "-q queue_a"                    # the server IP, or similar
  elsif ENV['SERVER_ROLE'] == 'b'
    "-q queue_b"                    # add more -q flags as needed
  end
end
# or read the queue list from a per-server config file
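A minimal sketch of that config-file approach, assuming a hypothetical config/sidekiq_queues.yml on each server that lists the queues it should work:
require 'yaml'

# config/sidekiq_queues.yml on server A might contain a single line: "- queue_a"
def queue_flags
  queues = YAML.load_file(Rails.root.join('config', 'sidekiq_queues.yml'))
  queues.map { |q| "-q #{q}" }.join(' ')
end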

Related

Sidekiq deployment fails with 'key not found: "MY_APP_DATABASE_PASSWORD"'

This is my first question to the community.
Background:
I'm trying to deploy Sidekiq on my own Debian Jessie server for a Rails 5.0.6 app that runs under Phusion Passenger as the user "deploy". I have Redis 3.2.6 installed and tested OK. I've opted for a systemd unit to start Sidekiq as a system service.
Here is the configuration:
[Unit]
Description=sidekiq
After=syslog.target network.target
[Service]
Type=simple
WorkingDirectory=/var/www/my_app/code
ExecStart=/bin/bash -lc 'bundle exec sidekiq -e production -C config/sidekiq.yml'
User=deploy
Group=deploy
UMask=0002
# if we crash, restart
RestartSec=4
#Restart=on-failure
Restart=always
# output goes to /var/log/syslog
StandardOutput=syslog
StandardError=syslog
# This will default to "bundler" if we don't specify it
SyslogIdentifier=sidekiq
[Install]
WantedBy=multi-user.target
Here is sidekiq.yml:
---
:verbose: true
:concurrency: 4
:pidfile: tmp/pids/sidekiq.pid
:queues:
  - [critical, 2]
  - default
  - low
production:
  :concurrency: 15
And finally #config/initializers/sidekiq.rb:
Sidekiq.configure_server do |config|
  config.redis = { url: "redis://#{ENV['SITE']}:6379/0", password: ENV['REDIS_PWD'] }
end

Sidekiq.configure_client do |config|
  config.redis = { url: "redis://#{ENV['SITE']}:6379/0", password: ENV['REDIS_PWD'] }
end
How it fails
I've been trying to solve the following error found in /var/log/syslog:
Dec 18 00:13:39 jjflo systemd[1]: Started sidekiq.
Dec 18 00:13:48 jjflo sidekiq[8159]: Cannot load `Rails.application.database_configuration`:
Dec 18 00:13:48 jjflo sidekiq[8159]: key not found: "MY_APP_DATABASE_PASSWORD"
which ends up in a loop of Sidekiq failures and retries...
Yet another try
I have tried the following, and this works:
cd /var/www/my_app/code
su - deploy
/bin/bash -lc 'bundle exec sidekiq -e production -C config/sidekiq.yml'
Could someone help me connect the dots, please?
Environment variables were obviously the problem. Since I was using
ExecStart=/bin/bash -lc 'bundle...
where -l makes bash act as a login shell, the deploy user's .bashrc was sourced, but Debian's stock .bashrc returns early for non-interactive shells. I had to move the export lines to the top of that file instead of the bottom where they used to be, or at least above this line of .bashrc:
case $- in
This post helped me a lot.
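For reference, the relevant part of the deploy user's .bashrc then looks roughly like this (the variable values are placeholders):
# ~/.bashrc (deploy user)
# Exports must come BEFORE the interactive check below, or non-interactive
# shells (like systemd's /bin/bash -lc) will never see them.
export SITE=127.0.0.1                      # placeholder
export REDIS_PWD=changeme                  # placeholder
export MY_APP_DATABASE_PASSWORD=changeme   # placeholder

# Debian's stock .bashrc returns here for non-interactive shells:
case $- in
    *i*) ;;
      *) return ;;
esac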

Sidekiq consumes too much memory

I am using Sidekiq with God in my Rails app, running under Passenger and Nginx.
I see many processes (30-50) run by Sidekiq, together consuming about 1000MB of RAM.
Processes like:
sidekiq 3.4.1 my_app_name [0 of 1 busy] - about 30 processes
ruby /home/myuser/.rvm/ruby-2.1.5/bin/sidekiq --environment ... - about 20 processes
How do I tell Sidekiq not to run so many processes?
My Sidekiq config (config/sidekiq.yml):
---
:concurrency: 1
:queues:
  - default
  - mailer
and the God config for Sidekiq:
num_workers = 1
num_workers.times do |num|
  God.watch do |w|
    ...
    w.start = "bundle exec sidekiq --environment #{rails_env} --config #{rails_root}/config/sidekiq.yml --daemon --logfile #{w.log}"
The problem is the "--daemon" (or "-d") flag, which makes Sidekiq fork and detach. God then loses track of the daemonized process, assumes it died, and starts another one, which is how you end up with dozens of processes. There is no need to run it as a daemon under God; just remove that flag.
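With the flag removed, the watch line above becomes:
w.start = "bundle exec sidekiq --environment #{rails_env} --config #{rails_root}/config/sidekiq.yml --logfile #{w.log}"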

Sidekiq not running at startup of passenger server in Rails 4.1.6 app

I need Sidekiq to run once I start the server on our staging application. We moved to a different server instance on Rackspace to better mirror our production conditions.
The application is started with
passenger start --nginx-config-template nginx.conf.erb --address 127.0.0.1 -p 3002 --daemonize
The sidekiq files are as follows:
# /etc/init/sidekiq.conf - Sidekiq config
# This example config should work with Ubuntu 12.04+. It
# allows you to manage multiple Sidekiq instances with
# Upstart, Ubuntu's native service management tool.
#
# See workers.conf for how to manage all Sidekiq instances at once.
#
# Save this config as /etc/init/sidekiq.conf then manage sidekiq with:
# sudo start sidekiq index=0
# sudo stop sidekiq index=0
# sudo status sidekiq index=0
#
# or use the service command:
# sudo service sidekiq {start,stop,restart,status}
#
description "Sidekiq Background Worker"
# no "start on", we don't want to automatically start
stop on (stopping workers or runlevel [06])
# change to match your deployment user
setuid root
setgid root
respawn
respawn limit 3 30
# TERM is sent by sidekiqctl when stopping sidekiq. Without declaring these as normal exit codes, it just respawns.
normal exit 0 TERM
instance $index
script
# this script runs in /bin/sh by default
# respawn as bash so we can source in rbenv
exec /bin/bash <<EOT
# use syslog for logging
exec &> /dev/kmsg
# pull in system rbenv
export HOME=/root
source /etc/profile.d/rbenv.sh
cd /srv/monolith
exec bin/sidekiq -i ${index} -e staging
EOT
end script
and workers.conf
# /etc/init/workers.conf - manage a set of Sidekiqs
# This example config should work with Ubuntu 12.04+. It
# allows you to manage multiple Sidekiq instances with
# Upstart, Ubuntu's native service management tool.
#
# See sidekiq.conf for how to manage a single Sidekiq instance.
#
# Use "stop workers" to stop all Sidekiq instances.
# Use "start workers" to start all instances.
# Use "restart workers" to restart all instances.
# Crazy, right?
#
description "manages the set of sidekiq processes"
# This starts upon bootup and stops on shutdown
start on runlevel [2345]
stop on runlevel [06]
# Set this to the number of Sidekiq processes you want
# to run on this machine
env NUM_WORKERS=2
pre-start script
for i in `seq 0 $((${NUM_WORKERS} - 1))`
do
start sidekiq index=$i
done
end script
When I go into the server and try service sidekiq start index=0 or service sidekiq status index=0, it can't find the service, but if I run bundle exec sidekiq -e staging, sidekiq starts up and runs through the job queue without problem. Unfortunately, as soon as I close the ssh session, sidekiq finds a way to kill itself.
How can I ensure Sidekiq runs when the server starts, and that it will restart itself if something goes wrong, as the Upstart configs above are meant to do?
Thanks.
In order to run Sidekiq as a service, you should put a script called "sidekiq" in /etc/init.d, not /etc/init.
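A minimal sketch of such a script, assuming the app lives in /srv/monolith as in the question (the flags match Sidekiq 3.x, which still supported daemonizing with -d):
#!/bin/sh
# /etc/init.d/sidekiq - minimal SysV-style wrapper (a sketch, not production-ready)
APP_DIR=/srv/monolith
PIDFILE=$APP_DIR/tmp/sidekiq.pid
case "$1" in
  start)
    cd "$APP_DIR" && bundle exec sidekiq -e staging -d -L log/sidekiq.log -P "$PIDFILE"
    ;;
  stop)
    cd "$APP_DIR" && bundle exec sidekiqctl stop "$PIDFILE"
    ;;
  *)
    echo "Usage: service sidekiq {start|stop}"
    exit 1
    ;;
esac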

Unicorn & Heroku - is a different configuration for each dyno possible?

I'm currently running my app on 2 Heroku dynos. From what I've looked up so far I need to add something similar to config/unicorn.rb:
worker_processes 3
timeout 30
@resque_pid = nil

before_fork do |server, worker|
  @resque_pid ||= spawn("bundle exec rake " + \
    "resque:work QUEUES=scrape,geocode,distance,mailer")
end
I have a few different background jobs to process; some need to run single-threaded and some concurrently. The problem with this configuration is that both Unicorn instances will spawn exactly the same Resque worker (same queues, etc.).
It would greatly simplify everything if I could change the type of queues each worker processes - or even have one instance running a resque worker and the other a sidekiq worker.
Is this possible?
Perhaps you are confusing Unicorn worker_processes with Heroku workers?
You can use your Procfile to start a Heroku worker for each queue and a Unicorn process for handling web requests.
Try this setup:
/config/unicorn.rb
worker_processes 4 # number of Unicorn workers to spin up
timeout 30 # restart workers that hang for 30 seconds
/Procfile
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
sidekiq: bundle exec sidekiq
resque: bundle exec rake resque:work QUEUES=scrape,geocode,distance,mailer
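Each process type in a Procfile must have a unique name, and separate names let you scale each one independently, e.g.:
heroku ps:scale sidekiq=1 resque=1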

Getting started with Rails on Heroku using a Procfile

Using a vanilla Rails install with git (in fact, following the Heroku guide here: https://devcenter.heroku.com/articles/rails3).
However, it mentions the creation of a Procfile:
web: bundle exec rails server thin -p $PORT -e $RACK_ENV
Yet if you run this using foreman start, you receive an error because you haven't defined RACK_ENV:
20:45:26 web.1 | started with pid 26364
20:45:27 web.1 | /SomeLocalPath/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.2/lib/rails/commands/server.rb:33:in `parse!': missing argument: -e (OptionParser::MissingArgument)
Where should this -e argument be stored for this all to work?
I guess you mean that you are getting this error on your local development machine.
You can set the RACK_ENV when starting foreman like this, for example:
RACK_ENV=development foreman start
Or you could use a different Procfile for development (e.g. "Procfile-dev") that has the value for the -e option inline, like this:
web: bundle exec rails server thin -p 3000 -e development
and call it with:
foreman start -f Procfile-dev
(On Heroku it should just work: if you run "heroku config -s" in your app folder, you should see "RACK_ENV=production", so the needed environment variable is set correctly there.)
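Alternatively, foreman loads a .env file from the project root by default, so a one-line .env containing:
RACK_ENV=development
would also work, and a plain foreman start would pick it up.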
