Kill multiple Sidekiq jobs from the same worker - ruby-on-rails

I would like to know how to kill many Sidekiq jobs from the same worker at once.
I deployed a bug to a production environment and there are queued jobs that are failing because of it. I can simply fix the bug and deploy again, but the jobs are time-sensitive (they send out SMS alerts to people).
Once the fix is deployed, those jobs will run and many people will get outdated SMS alerts. So I would like to kill all the jobs from that worker before deploying my fix.
Any suggestions? The buggy jobs are enqueued alongside many other jobs, so I can't just clear the whole queue.

Ideally you should enqueue those messages to a different queue so you can clear that queue on its own. There's no other efficient way to remove a set of jobs.
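If moving them to their own queue isn't possible after the fact, the Sidekiq API can still iterate a queue and delete just the jobs for one worker class (it has to scan every job, so it is slow on large queues). A minimal sketch, assuming the buggy worker is called SmsAlertWorker and sits on the default queue:

    # Run in a Rails console; requires the Sidekiq API.
    require 'sidekiq/api'

    queue = Sidekiq::Queue.new('default')   # 'default' is an assumption; use your queue name
    queue.each do |job|
      job.delete if job.klass == 'SmsAlertWorker'   # hypothetical worker class name
    end

    # If the alerts had been enqueued on their own queue, clearing it is one call:
    # Sidekiq::Queue.new('sms_alerts').clear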

Related

Sidekiq scheduled jobs get automatically deleted (Sidekiq + Rails)

I want the worker to run on a specific date, and I am able to schedule jobs in Sidekiq; the Sidekiq UI also shows the scheduled jobs correctly. But for some unknown reason my Sidekiq data (processed count, scheduled jobs, etc.) gets deleted and everything is reset to 0 in the Sidekiq UI. Can someone please help me understand this issue?
I suspect you are calling flush on Redis, clearing all your data.
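To make that suspicion concrete: anything that shares Sidekiq's Redis and flushes it will wipe the stats and the scheduled set. A hedged illustration (the URL and the flush call are just examples of the pattern to look for in your own code, tests, or cache setup):

    require 'redis'

    redis = Redis.new(url: ENV['REDIS_URL'])
    redis.flushdb   # deletes EVERY key in that Redis database, including Sidekiq's
                    # stats, retry set, and scheduled set

    # Giving Sidekiq its own Redis database (e.g. redis://localhost:6379/1) keeps it
    # isolated from code that flushes a shared cache database.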

How to continuously deploy with long-running jobs

We currently use delayed_job and rails to manage some long running jobs in our system. Some of these jobs take potentially hours to run, but we also like to deploy rather frequently, often many times a day. The problem with this setup is that we have to restart delayed_job during deployment to pick up code changes, so that any new jobs are processed with the latest code.
The solution we've arrived at is that for any job that needs to run for more than some small amount of time, we fork the delayed job so that it returns immediately, and the forked process handles the work. This way a deploy can restart all the delayed job processes, while the long-running 'job' keeps going until it's finished as an orphaned process.
We've looked at sidekiq, but it looks like we'd have the same issue there when trying to deploy new code.
Has anyone developed a solution they would recommend for dealing with long-running background processes that span multiple deployments?
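For reference, a minimal sketch of the fork-and-orphan pattern described above (the job class and method names are hypothetical). The delayed_job worker returns as soon as the child is forked, so a deploy can restart the workers while the orphaned child finishes its work:

    class LongRunningReportJob
      def perform
        pid = Process.fork do
          # The child must not share the parent's database connection.
          ActiveRecord::Base.establish_connection
          do_the_long_running_work
        end
        Process.detach(pid)   # don't leave a zombie; the child outlives the worker
      end

      private

      def do_the_long_running_work
        # ...hours of work...
      end
    end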

How can I configure Delayed jobs to not wait for a task before starting the others?

I am using Delayed Job for my Ruby app hosted on Heroku to perform a very long task that can take up to 5 minutes.
I've noticed that, in development mode at least, while this task is running the tasks that come afterwards are not started until it finishes. I would like other tasks to be able to start without having to wait for it (to have at least 3 concurrent tasks, for example).
I don't wish to increase the number of workers in Heroku ($$$).
I noticed the 'pool' param in delayed_job but I don't fully understand whether this is what I need or how to use it.
https://github.com/collectiveidea/delayed_job/blob/master/README.md
I achieved it using threads in the task code, but maybe this is not the best way to do it.
If you could tell me exactly how I could achieve concurrency in delayed jobs I would really appreciate it.
A DJ worker only runs a single job at a time. If you want concurrent processing of your background jobs, you'll need multiple background workers.
You would be better off switching to Sidekiq, which processes many jobs concurrently within a single worker process.
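If you stay on delayed_job, the 'pool' option from the README linked above starts several worker processes from one command, which is one way to get concurrency without adding Heroku dynos (syntax per the delayed_job README; double-check it against your version):

    # 3 worker processes serving all queues
    RAILS_ENV=production bin/delayed_job --pool=*:3 start

    # roughly equivalent: -n starts a fixed number of worker processes
    RAILS_ENV=production bin/delayed_job -n 3 start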

Rails delayed_job - how can I make sure there are workers running

If I forget to start the delayed_job workers on the server, the delayed jobs will stay pending forever, and it seems I can't get any errors from the Delayed::Job API. Is there an easy way to catch this mistake? I have a dashboard that lists the failed background jobs for admins; it would be great to show an alert if no worker is running. Thanks!
Hmm, I can find the un-run job via Delayed::Job in this case, and it will be processed once a worker is running again, so I think it should be fine.
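One way to surface this on the admin dashboard is to alert when jobs that should already have run are still sitting unlocked, which means no worker is picking up work. A hedged sketch (the method name and the 5-minute threshold are assumptions):

    # Returns true when runnable jobs have been waiting unlocked for too long.
    def delayed_job_workers_stalled?(threshold: 5.minutes)
      Delayed::Job.where(locked_at: nil, failed_at: nil)
                  .where("run_at < ?", threshold.ago)
                  .exists?
    end

    # e.g. in the dashboard:
    # flash.now[:alert] = "No delayed_job workers appear to be running" if delayed_job_workers_stalled?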

Can 2 sidekiq worker threads process the same job?

Is it possible that 1 job is being processed twice by 2 different sidekiq threads? I am using sidekiq to insert some analytics events into a mongodb collection, asynchronously. I see around 15 duplicates in that collection. My guess is that 2 worker threads picked the same job, at the same time, and added to the collection.
Does Sidekiq ensure that a job is picked up by only one thread? We can ignore the restart case, as the jobs are small and complete in less than 8 seconds.
Is firing analytics events asynchronously with Sidekiq a bad practice? What are my options? I could add a unique key to each event and check it before inserting to avoid duplicates, but that means storing data (plus an extra query on every insert) that I will never use otherwise, and it adds up over millions of events. Can I somehow ensure that a job is processed only once by Sidekiq?
Thanks for your help.
No. Sidekiq uses Redis as a work queue for background processing. Redis provides atomic operations for adding jobs to the queue and popping jobs off of the queue (specifically the redis BRPOP command). Each Sidekiq worker tries to fetch a job from the queue with a timeout via BRPOP and any given job popped from the queue will only be returned to one of the workers pulling work from the queue.
What is more likely is that you are enqueuing the same job multiple times.
Another possibility is that your job is throwing an error, causing it to partially execute and then be retried. By default Sidekiq retries failed jobs, but it has no built-in mechanism for transactions/atomicity of work. I.e. if your Sidekiq job does A, B, and C, and B raises an exception, the job fails and is retried, so A runs again on every retry. That is why Sidekiq jobs should be written to be idempotent.
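Given the retry behaviour, the usual fix is to make the job idempotent rather than trying to stop Sidekiq from ever running it twice. A minimal sketch assuming the Ruby mongo driver and a deterministic event_id generated by the enqueuer (the worker name, collection name, and env var are all assumptions):

    require 'sidekiq'
    require 'mongo'

    class AnalyticsEventWorker
      include Sidekiq::Worker

      COLLECTION = Mongo::Client.new(ENV.fetch('MONGODB_URL'))[:analytics_events]
      # A unique index makes duplicates impossible even under races:
      # COLLECTION.indexes.create_one({ event_id: 1 }, unique: true)

      def perform(event_id, payload)
        COLLECTION.update_one(
          { event_id: event_id },                                    # match on the deterministic key
          { '$setOnInsert' => payload.merge('event_id' => event_id) },
          upsert: true                                               # first run inserts, retries are no-ops
        )
      end
    end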
