I have a rails app where we want to use both sidekiq (redis queue) and shoryuken (sqs queue) for different use cases.
So now, all the Sidekiq workers are defined under app/workers. To separate out the Shoryuken workers, we thought to define and keep them under app/consumers.
So we are running this command to start Shoryuken:
shoryuken -r ./app/consumers/ -R -C ./config/shoryuken.yml
My question is: does giving the -r option with the worker directory path force Shoryuken to load workers from that path?
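For illustration, here is a minimal sketch of the kind of worker we would keep under app/consumers (the class name and queue name are just examples):

# app/consumers/order_event_consumer.rb -- hypothetical example
class OrderEventConsumer
  include Shoryuken::Worker

  # queue name is only an example; auto_delete removes the message after success
  shoryuken_options queue: 'order-events', auto_delete: true

  def perform(sqs_msg, body)
    # sqs_msg is the raw SQS message, body its payload
    Rails.logger.info("Processing SQS message: #{body.inspect}")
  end
end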
I have a Rails app using sidekiq, sidekiq-status, and sidekiq-batch:
gem "sidekiq"
gem "sidekiq-status"
# freemium version vs sidekiq pro https://github.com/breamware/sidekiq-batch
gem "sidekiq-batch"
# slim & sinatra for sidekiq monitoring
gem "sinatra", "2.0.0.beta2", require: nil
Locally when I run sidekiq and redis, the jobs process.
When deployed to heroku, the jobs are queued but they do not process.
I am using the Rediscloud add-on and have the env variable REDIS_PROVIDER set to REDISCLOUD_URL, and both REDISCLOUD_URL and REDIS_URL set to the URL generated by the Rediscloud add-on.
Procfile:
high_worker: DB_POOL=$SIDEKIQ_DB_POOL bundle exec sidekiq -c $SIDEKIQ_CONCURRENCY -t 8 -q high
default_worker: DB_POOL=$SIDEKIQ_DB_POOL bundle exec sidekiq -c $SIDEKIQ_CONCURRENCY -t 8 -q default
low_worker: DB_POOL=$SIDEKIQ_DB_POOL bundle exec sidekiq -c $SIDEKIQ_CONCURRENCY -t 8 -q low
When I start up a queue manually on Heroku, i.e. heroku run DB_POOL=10 bundle exec sidekiq -c 10 -t 8 -q low -a {my_app_name}, the queue processes those jobs.
What am I missing in my configuration?
Your Procfile defines process types, but it doesn't make them run.
Each process type can run on zero or more dynos. To change the number of dynos for each process type, you can use the heroku ps:scale command. For example, to scale your low_worker process type to one dyno you can run:
heroku ps:scale low_worker=1
Running dynos cost money and/or consume free dyno hours, so budget accordingly.
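Since the Procfile above defines three separate worker process types, each one needs to be scaled on its own; a sketch, assuming you want one dyno per queue:

heroku ps:scale high_worker=1 default_worker=1 low_worker=1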
Alternatively, you can have your jobs run at scheduled times via the Heroku Scheduler. This is less appropriate for tasks that should run continuously, but it's an option.
I'm confused about how Heroku and Sidekiq work together. My Procfile looks like:
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq -e $RAILS_ENV
Now, inside my Rails code I enqueue my Sidekiq jobs like:
SomeWorker.perform_async(some.id)
Now, will this automatically somehow make this job run on the worker dyno?
If yes, how does it just know to run this out of process?
It is confusing because when I am in my main git folder I can run heroku commands and I know these are for my web dyno, but how do I then see the logs for my worker dyno, or will these be in the same dyno logs?
When you set up your Procfile, you're telling Heroku to set up two types of dynos: web and worker. Both use the same Rails app code but start up with different commands (bundle exec puma vs. bundle exec sidekiq). You can then scale each process type to however many VMs (dynos) you need.
The glue that holds the two together is Redis. When you run SomeWorker.perform_async(some.id) from your web process, you're adding a record to Redis describing a job to run. Your worker process watches Redis for new records and then processes them.
The Heroku logs show logs from all running dynos. So you should see logs from both your web and worker processes mixed in together.
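As a rough sketch (SomeWorker and its queue are just placeholders here), the class that the worker dyno picks up is an ordinary Sidekiq worker, and you can filter the Heroku log stream down to one process type; the exact flag may vary with your CLI version:

# app/workers/some_worker.rb -- hypothetical worker matching the call above
class SomeWorker
  include Sidekiq::Worker
  sidekiq_options queue: :default

  def perform(some_id)
    # runs in the Sidekiq process on the worker dyno, pulled from Redis
  end
end

# tail only the worker dyno's output (flag name depends on CLI version)
heroku logs --tail --dyno worker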
Let's say I will run 3 worker processes, and I want to dedicate 1 of them to serving web requests and 2 to handling Sidekiq background jobs, each process potentially being multi-threaded. Is there an easy or best-practices way to handle this? I've tried a few different configurations, but they either give me an error or just don't process the jobs at all.
I'm using Rails 4 and ActiveJob, but I don't think those points are relevant.
You don't have to take care of Sidekiq worker scheduling from your Rails application. Sidekiq runs as a separate process that loads the whole Rails environment and manages the background workers. Each worker is a separate thread (backed by the Celluloid actor framework in older Sidekiq versions). So for your setup you just do the following:
# start a single puma web server process with min 4 and max 16 Threads
$ bundle exec puma -t 4:16
# start a single multithreaded sidekiq process,
# with concurrency settings defined in sidekiq.yml
$ bundle exec sidekiq --pidfile tmp/pids/sidekiq_1.pid
# start another sidekiq process (if really necessary)
$ bundle exec sidekiq --pidfile tmp/pids/sidekiq_2.pid
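For reference, the concurrency mentioned in the comment above is usually set in config/sidekiq.yml; a minimal sketch (values and queue names are only examples) could be:

# config/sidekiq.yml -- example values only
:concurrency: 10
:queues:
  - default
  - mailers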
I have been operating with 0 worker dynos, but need to increase the speed at which I'm processing background tasks. I am using Sidekiq for all of my background workers.
When I increase the worker dyno count to 1, I keep getting this error in my heroku logs:
Don't know how to build task 'jobs:work'
From researching this, it seems like the issue is that Heroku worker dynos rely on delayed_job by default, and I am not using delayed_job anywhere.
If I install delayed_job, what will I have to change to get sidekiq to work? Or do I even need delayed_job?
Update your project's Procfile to specify sidekiq for the worker process:
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
worker: bundle exec sidekiq
Then redeploy your application.
I think Heroku defaults to trying to run delayed_job (via rake jobs:work) if you don't specify a worker command in your Procfile.
I have a server with Apache + Passenger.
How will I run Sidekiq in production? Is any configuration needed to run the following command?
bundle exec sidekiq
Thanks
bundle exec sidekiq -d -L log/sidekiq.log -C config/sidekiq.yml -e production
-d, Daemonize process
-L, path to writable logfile
-C, path to YAML config file
-e, Application environment
A better solution than using the daemonization -d flag is to leverage the process supervisor provided by your OS. This is also the recommendation given by the sidekiq gem's wiki:
I strongly recommend people not to use the -d flag but instead use a process supervisor like systemd or upstart to manage Sidekiq (or any other server daemon). This ensures Sidekiq will immediately restart if it crashes for some reason.
The wiki provides example config files for both upstart and systemd in the "examples" directory of the repo.
NOTE
On my CentOS 7 server I use rvm (Ruby Version Manager). I had to perform an extra step to ensure that my systemd script (/etc/systemd/system/sidekiq.service) could reliably start and stop sidekiq, even in the case where my ruby and/or gemset paths change in the future. The most important directive is "ExecStart", which looks like the following in my script:
ExecStart=/usr/local/rvm/wrappers/surveil/bundler exec sidekiq -e production -L log/sidekiq.log -C config/sidekiq.yml
The "/usr/local/rvm/wrappers/surveil" part of the path is actually a symlink, which I recreate with the help of 'rvm alias' during deployment to ensure that it always points to the app's ruby version and gemset, both of which could feasibly change from one deployment to another. This is achieved by creating a rake task that runs during deployment and does the equivalent of the following:
rvm alias delete surveil
rvm alias create surveil ruby-#{new_ruby_version}##{new_gemset_name}
By setting up this alias/symlink during deployment, I can safely leave the systemd script untouched and it will keep working fine. This is because the path "/usr/local/rvm/wrappers/surveil/bundler" always points to the correct version of bundler and thus benefits from the bundler magic that causes its targets to run in the app's configured ruby/gem environment.
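For context, a stripped-down version of such a systemd unit could look like the following; the working directory, user, and the rvm wrapper path are specific to my setup and need adjusting:

# /etc/systemd/system/sidekiq.service -- simplified sketch, adjust paths and user
[Unit]
Description=sidekiq
After=network.target

[Service]
Type=simple
WorkingDirectory=/var/www/surveil/current
ExecStart=/usr/local/rvm/wrappers/surveil/bundler exec sidekiq -e production -L log/sidekiq.log -C config/sidekiq.yml
User=deploy
Restart=on-failure

[Install]
WantedBy=multi-user.target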
You should be able to start Sidekiq as a background process (daemon) by passing the -d argument when you start it up:
bundle exec sidekiq -d
Although this answer should work for you now, please be aware that if the sidekiq process crashes for any reason, it will have to be manually restarted. A good starting place for finding out about more robust ways to run sidekiq in production is here: https://github.com/mperham/sidekiq/wiki/Deployment
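If you do go the daemonization route, note that the -d, -L and -P flags only exist in older Sidekiq versions (they were removed in Sidekiq 6.0); a sketch of a fuller invocation with a pidfile, so the process can be stopped cleanly later, might be:

# start daemonized with an explicit logfile and pidfile (older Sidekiq versions only)
bundle exec sidekiq -d -e production -L log/sidekiq.log -P tmp/pids/sidekiq.pid

# later, stop it via the pidfile (sidekiqctl also shipped with older Sidekiq versions)
bundle exec sidekiqctl stop tmp/pids/sidekiq.pid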