Remove Resque jobs permanently?

Is there a way to permanently remove jobs from a resque queue? The following commands remove the jobs, but when I restart the workers and the resque server, the jobs load back up.
Resque::Job.destroy("name_queue", Class)
OR
Resque.remove_queue("name_queue")

The problem is that you're not removing the specific job instances that were added to your Redis server through Resque. When you remove the queue and it gets recreated on restart, the job data for that queue may still be sitting in Redis. Depending on your implementation, you can work around this inside your job's perform method. For instance, if a job manipulates a model, it can check whether that model has been destroyed before doing anything.
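A minimal sketch of that guard, with Widget as a hypothetical in-memory stand-in for an ActiveRecord model (in a real app you'd use something like Widget.find_by(id: widget_id) instead):

```ruby
# Widget is a fake in-memory "table" standing in for an ActiveRecord model.
class Widget
  STORE = {} # id => widget

  attr_reader :id, :updates

  def initialize(id)
    @id = id
    @updates = 0
    STORE[id] = self
  end

  def self.find_by_id(id)
    STORE[id] # returns nil instead of raising when the record is gone
  end

  def destroy
    STORE.delete(id)
  end

  def touch!
    @updates += 1
  end
end

# Resque-style job: bail out quietly if the record was destroyed
# after the job was enqueued.
class UpdateWidgetJob
  @queue = :widgets

  def self.perform(widget_id)
    widget = Widget.find_by_id(widget_id)
    return if widget.nil? # stale job referencing a destroyed record
    widget.touch!
  end
end
```

This way a stale job that survives a queue wipe becomes a harmless no-op instead of raising on a missing record.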

Related

Is it safe to rename a Sidekiq worker with jobs still in the queue?

Can I rename a Sidekiq worker and deploy it in one step without worrying about jobs getting orphaned looking for the previous name? Or do I need to do a 2-step deploy to make sure the original jobs have drained from the queue before deleting the original worker?
For example, if I wanted to rename EmailSignupWorker to EmailRegistrationWorker, do I need to:
1. Create a new EmailRegistrationWorker with the same contents as EmailSignupWorker, and use the new worker everywhere EmailSignupWorker was being used.
2. Deploy.
3. Wait for any EmailSignupWorker jobs to drain.
4. Delete EmailSignupWorker.
5. Deploy.
It is not safe. Instead of copying the code, you can alias the old name to the new class:
class A
end
B = A
Now B refers to the same class as A, so jobs referencing either name resolve to the same code.
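A sketch of that aliasing applied to the worker names from the question (the include Sidekiq::Worker line a real worker would have is omitted so the snippet runs standalone):

```ruby
# New worker under the new name. In a real app this would
# `include Sidekiq::Worker`; omitted here to stay self-contained.
class EmailRegistrationWorker
  def perform(user_id)
    # ... send the registration email ...
  end
end

# Keep the old constant pointing at the new class until all jobs
# enqueued under the old name have drained; then delete this line.
EmailSignupWorker = EmailRegistrationWorker
```

Jobs already in the queue that reference EmailSignupWorker will constantize to the same class as EmailRegistrationWorker, so nothing gets orphaned.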

How do I edit a job in the Sidekiq queue?

I have a queue that happens to contain wrong parameters for the perform_async worker. I don't want to lose the jobs, but I want to edit the arguments so they will succeed next time or on a forced retry.
Is this possible?
Sidekiq stores its jobs in Redis, so try using a Redis GUI (like http://redisdesktop.com/): find the job you need to update, edit it, and save. This can also be done in a loop to update multiple jobs.
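If you'd rather script it than click through a GUI, here is a sketch assuming Sidekiq's standard payload format, where each entry on a queue list in Redis is a JSON hash with an "args" array. The queue is simulated with a plain Array so the snippet runs standalone; against a live server you would read entries with redis-rb's lrange and write them back with lset:

```ruby
require "json"

# Rewrite the "args" of every job payload on a (simulated) queue.
# Each element of `queue` is a JSON string like Sidekiq stores in Redis.
def fix_job_args(queue)
  queue.map! do |raw|
    job = JSON.parse(raw)
    job["args"] = yield(job["args"]) # caller supplies the correction
    JSON.generate(job)
  end
end
```

Usage: fix_job_args(queue) { |args| [corrected_value, *args[1..]] } — the block receives the old args and returns the repaired ones.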

Redis gets flushed mysteriously

I am running a Rails app and using Redis both for jbuilder's cache and for the Sidekiq queue. I use Sidekiq to send emails asynchronously. Every time I try to send mass emails, say 20k emails in the background, after a while all the background jobs in the Sidekiq queue are cleared, leaving 0 jobs in the queue.
I filed an issue on the Sidekiq GitHub page (link), and the author said something or someone could be flushing my Redis. No one is flushing Redis manually, and I wonder how I can find out when and how Redis gets flushed.
I've checked the Redis log file and found nothing strange.
Here is the documentation on renaming commands. Perhaps consider renaming FLUSHALL and FLUSHDB to something non-obvious, or disabling them outright.
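For reference, the rename is done in redis.conf with the rename-command directive; renaming a command to an empty string disables it entirely:

```
# redis.conf
rename-command FLUSHALL ""                 # disable the command outright
rename-command FLUSHDB  obscure-flushdb-x7 # or rename to a secret name
```

With this in place, any stray client or library that issues FLUSHALL gets an unknown-command error instead of wiping the database.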

Resque worker not responding to signals

I'm using version 1.25.2 of Resque in a Rails app.
I tried invoking the instance methods pause_processing and its reverse, unpause_processing, of the Resque::Worker class on all the workers I fetched through Resque.workers. However, the workers still continued to process new jobs added dynamically to any queue, even though instance.paused? returned true for every worker.
I'm not sure whether I can really control workers running in the background.
As far as I can tell, pause_processing, unpause_processing, and shutdown do the same thing as sending the USR2, CONT, and QUIT signals to Resque workers.
Am I missing something trivial, or is there another way to manage the workers?
This is because you're calling the method on, and modifying the state of, an entirely different instance of the worker: all pause_processing does is set an instance variable. When you call Resque.workers, you get instances describing all the workers, but not the instance running inside another process. Resque doesn't allow you to modify the state of a running worker remotely; if you want to change a worker's state from the outside, you must interact with it via signals. Merely calling the methods does not send a signal.
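A short sketch of the signal route. Resque registers each worker under an id of the form "host:pid:queues"; the helper below (hypothetical, not part of Resque's API) parses that id so the process can be signaled directly when it runs on the local machine:

```ruby
# Hypothetical helper: split a Resque worker id ("host:pid:queues")
# into its host and numeric pid so the worker process can be signaled.
def worker_host_and_pid(worker_id)
  host, pid, _queues = worker_id.split(":", 3)
  [host, Integer(pid)]
end

# In a real app, run on the same machine as the workers:
#   require "socket"
#   Resque.workers.each do |w|
#     host, pid = worker_host_and_pid(w.id)
#     Process.kill("USR2", pid) if host == Socket.gethostname # pause
#     # "CONT" resumes; "QUIT" shuts down gracefully
#   end
```

Signals only reach processes on the same host, so in a multi-machine setup each host has to signal its own workers.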

How long can a sucker_punch worker run for on heroku?

I have a sucker_punch worker which is processing a CSV file. I initially had a problem with the CSV file disappearing when the dyno powered down; to fix that I'm going to set up S3 for file storage.
But my current concern is whether a dyno powering down will stop my worker in its tracks.
How can I prevent that?
Since sucker_punch uses a separate thread on the same dyno and does not use an external queue or persistence (the way delayed_job, sidekiq, and resque do), you are subject to losing the job when your dyno gets rebooted or stopped, and you'll have no way to restart it. On Heroku, dynos are rebooted at least once a day. If you need persistence and the ability to retry a job when a dyno goes down, I'd say switch to one of the other job libraries:
https://github.com/collectiveidea/delayed_job
https://github.com/mperham/sidekiq
https://github.com/resque/resque
However, these require using a Heroku add-on. You can get away with the free tier, but you will still have to pay for the extra worker process. Other than that, you'd have to implement your own persistence and retrying by wrapping sucker_punch. Here's a discussion on adding those features to sucker_punch: https://github.com/brandonhilkert/sucker_punch/issues/21 They basically say to use Sidekiq instead.
