How to add a new Sidekiq process? (ruby-on-rails)

Currently, we have one process in Production serving all of our workers. How do we add a new process/Sidekiq instance?
Is there a way to spawn a new process dedicated to a specific queue?

For all things Sidekiq, the Sidekiq Documentation is the best place to start.
In your case, you'll be specifically interested in the "Advanced Options" documentation.

A couple of ways:
sidekiqswarm, which is part of the paid Sidekiq Enterprise, can boot and manage several processes from a single command
run bundle exec sidekiq in one terminal, then open another terminal and run bundle exec sidekiq again; you now have two processes
put the Sidekiq service inside a Docker image and run three containers from that image; you now have three processes
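For the "dedicated to a specific queue" part of the question, Sidekiq's -q flag limits which queues a process pulls from. A sketch (the queue names here are placeholders; use your own):

```shell
# one process working only the default queue
bundle exec sidekiq -q default

# a second process dedicated to a hypothetical "reports" queue
bundle exec sidekiq -q reports
```

Each invocation is an independent process, so you can combine this with any of the approaches above.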

Related

Rails/Capistrano: check if sidekiq is running on an EC2 instance

I'm using an EC2 instance to host a Rails application. I deploy with Capistrano and have already included sidekiq, and it's working fine. However, sometimes on deploy, and sometimes sporadically, sidekiq stops running and I don't notice until some tasks that use sidekiq fail to run.
I could check for that on deploy, but if it stops working some time after the deploy, that would still be a problem.
I would like to know the best way, in this scenario, to check periodically whether sidekiq is running, and to restart it if it isn't.
I thought of writing a bash script for that, but apparently, when I run sidekiq from the command line, it creates another process with a pid different from the one it was launched with... so I think it could get messy.
Any help is appreciated. Thanks!
Learn and use systemd to manage the service.
https://github.com/mperham/sidekiq/wiki/Deployment#running-your-own-process
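The linked wiki page covers this in detail; a minimal unit-file sketch is below. The user, paths and app directory are assumptions — adapt them to your server:

```ini
# /etc/systemd/system/sidekiq.service -- minimal sketch, not a drop-in file
[Unit]
Description=sidekiq
After=network.target

[Service]
Type=simple
User=deploy
WorkingDirectory=/var/www/myapp/current
ExecStart=/usr/local/bin/bundle exec sidekiq -e production
Restart=always

[Install]
WantedBy=multi-user.target
```

With Restart=always, systemd restarts Sidekiq whenever it exits, which addresses the "stops running sporadically" problem directly; enable it with systemctl enable --now sidekiq.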

How to detect orphaned sidekiq process after capistrano deploy?

We have a Rails 4.2 application that runs alongside a sidekiq process for long tasks.
A few weeks ago a deploy went south (the Capistrano deploy process didn't effectively stop the sidekiq process; I wasn't able to figure out why), and an orphaned sidekiq process was left running, competing with the current one for jobs on the redis queue. Because the orphaned process was running outdated code, our application started giving inconsistent results (depending on which process picked up the job), and we had a very hard time until we figured this out.
How can I stop this from ever happening again? I could ssh into the VPS after every deploy and run ps aux | grep sidekiq to check whether there is more than one, but that's not practical.
Use your init system to manage Sidekiq, not Capistrano.
https://github.com/mperham/sidekiq/wiki/Deployment#running-your-own-process
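If you still want an automated version of the ps aux | grep sidekiq check (e.g. as a post-deploy sanity task), here is a minimal Ruby sketch of the same idea; the helper name is mine, not part of Sidekiq:

```ruby
# Count running processes whose command line contains a pattern,
# by parsing `ps` output (same idea as `ps aux | grep sidekiq`).
def process_count(pattern = "sidekiq")
  `ps -eo args=`.lines.count { |line| line.include?(pattern) }
end

# More than one match right after a deploy suggests an orphan.
warn "orphaned sidekiq suspected!" if process_count("sidekiq") > 1
```

This is only a detection aid; the real fix is still letting the init system own the process lifecycle.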

One Heroku worker process for both delayed_job and sidekiq?

Currently our Heroku app has two dynos: web and worker. The worker dyno is set up to run bundle exec rake jobs:work, which starts up delayed_job. I have some new Sidekiq jobs that I also need to run. (I plan to convert our delayed_job jobs to Sidekiq soon, but haven't yet.) My question is: do I need to add and pay for a third Heroku dyno ("sidekiqworker"?), or is there a way for me to specify that my existing worker dyno run both delayed_job and Sidekiq?
Unfortunately, you will need to pay for a third Heroku dyno. I've experimented with giving both processes the same name ("worker"), but only one of them gets registered. When you add a new process name, Heroku picks it up and sets that new process type to 0 dynos.
Refer to multiple worker/web processes on a single heroku app for more details.
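For reference, the process types come from your Procfile. With something like the following (the web line is illustrative; only the two worker lines matter here), Heroku sees three separate dyno types, each scaled and billed on its own:

```
web: bundle exec puma -C config/puma.rb
worker: bundle exec rake jobs:work
sidekiq: bundle exec sidekiq
```

That is why the delayed_job and Sidekiq workers cannot share one dyno: each named line maps to its own process type.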

Keeping rake jobs:work running

I'm using delayed_job to run jobs, with new jobs being added every minute by a cronjob.
I have an issue where the rake jobs:work task, currently started manually with nohup rake jobs:work &, randomly exits.
While God seems to be a solution to some people, the extra memory overhead is rather annoying and I'd prefer a simpler solution that can be restarted by the deployment script (Capistrano).
Is there some bash/Ruby magic to make this happen, or am I destined to run a monitoring service on my server, with some horrid hacks to give the unprivileged account the site deploys as the ability to restart it?
I'd suggest using foreman. It lets you start any number of jobs in development with foreman start, and then export your configuration (number of processes per type, limits, etc.) as upstart scripts so Ubuntu's upstart can manage them (why invoke God when the operating system already provides this for free?).
The configuration file, Procfile, is also exactly the same file Heroku uses for process configuration, so with just one file you get three process management systems covered.
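For example (the app name "myapp" and user "deploy" are placeholders):

```shell
# start everything defined in the Procfile locally
foreman start

# export upstart scripts for production
foreman export upstart /etc/init -a myapp -u deploy
```

After the export, upstart supervises the processes and restarts them if they die, which solves the original "randomly exiting" problem without a separate monitoring daemon.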

Running delayed_job worker on Heroku?

So right now I have an implementation of delayed_job that works perfectly on my local development environment. In order to start the worker on my machine, I just run rake jobs:work and it works perfectly.
To get delayed_job to work on heroku, I've been using pretty much the same command: heroku run rake jobs:work. This solution works, without me having to pay anything for worker costs to Heroku, but I have to keep my command prompt window open or else the delayed_job worker stops when I close it. Is there a command to permanently keep this delayed_job worker working even when I close the command window? Or is there another better way to go about this?
I recommend the workless gem to run delayed jobs on heroku. I use this now - it works perfectly for me, zero hassle and at zero cost.
I have also used hirefireapp, which gives a much finer degree of control over scaling workers. It costs money, but less than a single Heroku worker over a month. I don't use it now, but when I did, it worked very well.
Add
worker: rake jobs:work
to your Procfile.
EDIT:
Even if you run it from your console you 'buy' a worker dyno, but Heroku bills per second. So you don't pay anything: you have 750 free dyno-hours, and the longest month has only 744 hours, which leaves 6 free hours for your extra dynos, scheduler tasks and so on.
I haven't tried it personally yet, but you might find nohup useful. It allows your process to run even though you have closed your terminal window. Link: http://linux.101hacks.com/unix/nohup-command/
Using the heroku console to run workers only creates a temporary one-off dyno for the job. To keep the jobs running without the CLI, you need to put the command into the Procfile as #Lucaksz suggested.
After deployment, you also need to scale the dyno formation, as Heroku needs to know how many dynos to run for each process type:
heroku ps:scale worker=1
More details can be read here https://devcenter.heroku.com/articles/scaling
