delayed_job Won't Process My Queue?

I am using the delayed_job gem but running it against two queues. I have mapped my models to the correct queues (databases) to establish the right connections.
Jobs get enqueued fine; however, delayed_job will process one queue but not the other. I am trying to manually force it to process the email queue, but it simply won't.
Is there a way to configure/force it to? Or pass it the correct backend to process?
Below, I am counting jobs and getting a correct count. However, if I try to 'work_off' the queue, it shows 0 successes/failures.
I'm pretty sure that's because it's hitting the wrong queue. Any ideas?
Delayed::Worker::Email::Job.count
=> 12032
Delayed::Worker.new(:backend => Email::Job).work_off
=> [0, 0]
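(For context, the kind of per-queue backend wiring described above might look roughly like the sketch below. The Email::Job class body and the email_queue connection name are assumptions, not code from the question; it assumes delayed_job_active_record and a matching entry in database.yml.)
module Email
  class Job < Delayed::Backend::ActiveRecord::Job
    # Pin this backend's table to the second (email) database.
    establish_connection :email_queue
  end
end

# A worker pointed at that backend, as attempted in the question:
Delayed::Worker.new(:backend => Email::Job).work_off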

I ended up just going with one queue. This worked best and saved the headache of juggling two. It would be cool if DJ eventually supported multiple backends/queues.

Related

Ruby on Rails - Best way to asynchronously make long api calls and return values to js on the frontend?

I am creating a search page that displays up to 9 rates. On the frontend, I am sending a request to my rails application that contains the necessary data to grab the 9 rates.
In one of my rails controllers, I crawl a webpage to get the rate. This can take anywhere between 2 and 15 seconds.
I would like to run all 9 requests in the background so I can process other requests that come in. For example, the user can make a search and suggested results will display.
I am attempting to use the concurrent-ruby gem with Promises.
The cleaned_params variable is an array of the data needed to make the requests; there are up to 9 requests' worth of data.
Here is what I have so far:
tasks = cleaned_params.map { |request_data|
  Concurrent::Promises.future(request_data) { |request_data| api_get_rate(request_data) }
}
# My tasks could still be in the pending state; all_promises is a new promise
# that will be fulfilled once all of the inner promises have been fulfilled.
all_promises = Concurrent::Promises.zip(*tasks)
# Use all_promises.value! to block - I don't want to render a response until we have the rates.
render json: { :success => true, :status => 200, :rates => all_promises.value! }
Right now I see that all requests to api_get_rate are being started, but inside my api_get_rate function I call a method in another class, BetterRateOverride.check_rate. When I run this same code synchronously, I can call that method successfully, but with the setup above my code just hangs once it reaches this call. Why is this happening?
Is it not possible to call a method from another class while in a background thread? Do promises run in background threads? I read that promises run in the Ruby global thread pool.
If this is not the best approach, can you steer me in the right direction?
Thanks for any help.
Edit: I think this may be the issue for my code deadlocking:
https://github.com/rails/rails/issues/26847
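(A commonly suggested mitigation for that autoloading deadlock, sketched here on the assumption of Rails 5+ with the classic autoloader, is to release the load interlock while blocking on the promises:)
all_promises = Concurrent::Promises.zip(*tasks)
# Let other threads acquire the autoload lock while this thread blocks on the results.
rates = ActiveSupport::Dependencies.interlock.permit_concurrent_loads do
  all_promises.value!
end
render json: { :success => true, :status => 200, :rates => rates }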
The conventional Rails approach to this kind of problem is to implement the long-running request as a background job using ActiveJob.
Each rate request would trigger a separate job running in a worker process, and the job would update a record in the DB (or Redis) upon completion.
You'd then have another controller that your JS polls to check the status/results of the individual jobs.
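(A minimal sketch of that job/poller pair, assuming Rails 5+. The RateFetchJob and RateRequest names, the status column, and the search_id parameter are all assumptions, not code from the question:)
class RateFetchJob < ApplicationJob
  queue_as :default

  def perform(rate_request_id)
    request = RateRequest.find(rate_request_id)
    rate = api_get_rate(request.params) # the slow crawl from the question
    request.update!(rate: rate, status: 'done')
  end
end

# Controller the frontend polls until every request reports 'done':
class RateStatusController < ApplicationController
  def show
    requests = RateRequest.where(search_id: params[:id])
    render json: { done: requests.all? { |r| r.status == 'done' },
                   rates: requests.where(status: 'done').pluck(:rate) }
  end
end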
Unless you're a Rails expert, I would recommend against using the concurrent-ruby gem together with Rails, as it can make things quite complicated.
One common approach was already provided by @fylooi - using ActiveJob to handle background jobs and a JavaScript poller to detect when they're finished. You would have to set up the ActiveJob backend, which is a little bit of work.
Another solution would be to stay completely synchronous in Rails and do the parallelization in JavaScript instead. I.e., you would run multiple AJAX requests in parallel. (Browsers cap concurrent requests to one host at around 6, but this might be enough for your case.)

Sidekiq - how to execute the job immediately (and does it make sense to use a queue in this case)?

I have a task that I need to enqueue immediately after a request is created and have it executed ASAP.
So for this purpose, I have created a /config/sidekiq.yml file where I defined this:
---
:queues:
  - default
  - [critical, 10]
And for the respective worker, I set this:
class GeneratePDFWorker
  include Sidekiq::Worker
  sidekiq_options queue: 'critical', retry: false

  def perform(order_id)
    # ...
  end
end
Then, when I call this worker:
GeneratePDFWorker.perform_async(@order.id)
So I am testing this. But I found this post, where it is said that if I want to execute the task immediately, I should call:
GeneratePDFWorker.new.perform(@order.id)
So my question is - should I use the combination of a (critical) queue + the new (GeneratePDFWorker.new.perform) method? Does it make sense?
Also, how can I verify that the task is executed as critical?
Thank you
So my question is - should I use the combination of a (critical) queue + the new (GeneratePDFWorker.new.perform) method? Does it make sense?
Using GeneratePDFWorker.new.perform will run the code right there and then, like normal inline code (in a blocking manner, not async). You can't define a queue, because it's not being queued.
As Walking Wiki mentioned, GeneratePDFWorker.new.perform(@order.id) will call the worker synchronously. So if you did this from a controller action, the request would block until the perform method completed.
I think your approach of using priority queues for critical tasks with Sidekiq is the way to go. As long as you have enough Sidekiq workers, and your queue isn't backlogged, the task should run almost immediately so the benefit of running your worker in-process is pretty much nil. So I'd say yes, it does make sense to queue in this case.
Also, you're probably aware of this, but Sidekiq has a great monitoring UI: https://github.com/mperham/sidekiq/wiki/Monitoring. This should make it easy to get reliable, detailed metrics on the performance of your workers.
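(On the "how can I verify" part: one quick way is to inspect the queue with the Sidekiq API from a console. A sketch; jobs disappear from the queue once a worker picks them up:)
require 'sidekiq/api'

queue = Sidekiq::Queue.new('critical')
puts queue.size
queue.each { |job| puts "#{job.klass} #{job.args.inspect}" }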
should I use the combination of a (critical) queue?
Me:
Yes, you can use the critical queue if you feel it's warranted. A queue with a weight of 2 will be checked twice as often as a queue with a weight of 1.
Tips:
Keep the number of queues as few as possible. Sidekiq is not designed to handle a tremendous number of queues.
Also keep weights as simple as possible. If you want queues always processed in a specific order, just declare them in order without weights, as in the sketch below.
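(A sketch of that strictly ordered form, mirroring the sidekiq.yml above: critical is always drained before default.)
---
:queues:
  - critical
  - default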
the new (GeneratePDFWorker.new.perform) method?
Me: No. Running a Sidekiq worker synchronously in the request thread is bad in the first place. It will hamper your application's performance, as your application server will be busy for longer. This will be very expensive for you. Then what would be the point of using Sidekiq?

Sidekiq handling re-queue when processing large data

See the updated question below.
Original question:
In my current Rails project, I need to parse a large XML/CSV data file and save it into MongoDB.
Right now I use these steps:
Receive the uploaded file from the user and store the data in MongoDB.
Use Sidekiq to process the data in MongoDB asynchronously.
After processing finishes, delete the raw data.
For small and medium data on localhost, the steps above run well. But on Heroku, I use HireFire to dynamically scale the worker dyno up and down. While the worker is still processing the large data, HireFire sees an empty queue and scales the worker dyno down. This sends a kill signal to the process and leaves the job in an incomplete state.
I'm searching for a better way to do the parsing, one that allows the parsing process to be killed at any time (saving the current state when it receives the kill signal) and to be re-queued.
Right now I'm using Model.delay.parse_file and it doesn't get re-queued.
UPDATE
After reading the Sidekiq wiki, I found an article about job control. Can anyone explain the code, how it works, and how it preserves its state when receiving the SIGTERM signal so the worker gets re-queued?
Is there any alternative way to handle job termination, save the current state, and continue right from the last position?
Thanks,
It might be easier to explain the process and the high-level steps, give a sample implementation (a stripped-down version of one that I use), and then talk about throw and catch:
Insert the raw CSV rows with an incrementing index (to be able to resume from a specific row/index later).
Process the CSV, stopping after every 'chunk' to check if the job is done by checking if Sidekiq::Fetcher.done? returns true.
When the fetcher reports done?, store the index of the currently processed item on the user and return, so that the job completes and control is returned to Sidekiq.
Note that if a job is still running after a short timeout (default 20s), it will be killed.
Then when the job runs again, simply start where you left off last time (or at 0).
Example:
class UserCSVImportWorker
  include Sidekiq::Worker

  def perform(user_id)
    user = User.find(user_id)
    items = user.raw_csv_items.where(:index => {'$gte' => user.last_csv_index.to_i})

    items.each_with_index do |item, i|
      # Every 100 items, check whether shutdown has started.
      if ((i + 1) % 100) == 0 && Sidekiq::Fetcher.done?
        user.update(last_csv_index: item.index)
        return
      end
      # Process the item as normal
    end
  end
end
The above class makes sure that every 100 items we check whether the fetcher is done (a proxy for whether shutdown has started) and, if so, end execution of the job. Before execution ends, however, we update the user with the last index that was processed, so that we can start where we left off next time.
throw/catch is a way to implement the functionality above a little more cleanly (maybe), but it's a little like using Fibers: a nice concept, but hard to wrap your head around. Technically, throw/catch is closer to goto than most people are generally comfortable with. See the sketch below.
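(For illustration, the same loop reworked with throw/catch, reusing the names from the worker above; a sketch, not tested code:)
def perform(user_id)
  user = User.find(user_id)
  items = user.raw_csv_items.where(:index => {'$gte' => user.last_csv_index.to_i})

  catch(:shutdown) do
    items.each_with_index do |item, i|
      if ((i + 1) % 100) == 0 && Sidekiq::Fetcher.done?
        user.update(last_csv_index: item.index)
        throw :shutdown # unwinds straight out of the loop
      end
      # Process the item as normal
    end
  end
end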
edit
Also, you could skip the call to Sidekiq::Fetcher.done? and instead record the last_csv_index on each row, or on each chunk of rows processed; that way, if your worker is killed without the opportunity to record the last_csv_index, you can still resume 'close' to where you left off.
You are trying to address the concept of idempotency: the idea that processing a thing multiple times, with potentially incomplete cycles, does not cause problems. (https://github.com/mperham/sidekiq/wiki/Best-Practices#2-make-your-jobs-idempotent-and-transactional)
Possible steps forward
Split the file up into parts and process those parts with a job per part.
Lift the HireFire threshold so that it only scales down when jobs are likely to have fully completed (10 minutes).
Don't allow HireFire to scale down while a job is working (set a Redis key on start and clear it on completion; see the sketch after this list).
Track progress of the job as it is processing and pick up where you left off if the job is killed.
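(A sketch of the Redis-key guard from the list above. The jobs:in_progress key name and FileImportWorker class are assumptions, and your HireFire scaling logic would need to consult the key before scaling down:)
class FileImportWorker
  include Sidekiq::Worker

  def perform(file_id)
    Sidekiq.redis { |r| r.incr('jobs:in_progress') }
    # ... process the file, checkpointing progress as described above ...
  ensure
    # Always clear the guard, even if the job raises.
    Sidekiq.redis { |r| r.decr('jobs:in_progress') }
  end
end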

Moving a Resque job between queues

Is there any way to move a Resque job between two different queues?
We sometimes get into a situation where we have a big queue and a job near the end whose priority we need to "bump up." We thought an easy way might be to simply move it to another queue that has a worker waiting for high-priority jobs.
This happens rarely and is usually a case where we get a special call from a customer, so scaling or re-engineering doesn't seem totally necessary.
There is nothing built in to Resque. You can use rpoplpush like:
module Resque
  def self.move_queue(source, destination)
    r = Resque.redis
    r.llen("queue:#{source}").times do
      r.rpoplpush("queue:#{source}", "queue:#{destination}")
    end
  end
end
https://gist.github.com/rafaelbandeira3/7088498
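(Usage would look something like the line below; note that it moves every job currently in the source queue, not a single job:)
Resque.move_queue('low_priority', 'high_priority')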
If it's a rare occurrence, you're probably better off just manually pushing a new job into a shorter queue. You'll want to make sure your system can identify that the job has already run and bail out, so that when the original job in the long queue is finally reached, it is not processed again (if double processing is a problem for you).

Resque.. how can I get a list of the queues

Ok, on Heroku I have up to 24 workers (as I understand it).
I have, say, 1000 clients, each with their own "schema" in a PostgreSQL database.
Each client has tasks that can be done "later" - sending orders to my company's back end is a great example.
I was thinking I could create a new queue for each client, and each queue would have its own worker (process). That, it seems, isn't in the cards.
So my thinking now is to have a queue field in the client record,
so clients 1 through 15 are in queue_a
and clients 16 through 106 are in queue_b, etc. If one client is using heaps, we could move them to a new queue, or move others out of the slow queue. Clients with low volumes could be grouped together. It would be a balancing act, but it wouldn't be all that hard to manage if we kept track of metrics (which we will anyway).
(Any counter-ideas would be awesome to hear; I'm really in the spitball phase.)
Right now, though, I'd like to figure out how to create a worker for each queue.
https://gist.github.com/486161 tells me how to create X workers, but doesn't really let me assign a worker to a queue. If I knew that, and how to get a list of queues, I think I'd be on my way to a viable solution to the limits.
Reading http://blog.winfieldpeterson.com/2012/02/17/resque-queue-priority/
I realize that my plan is fraught with hardship. The first client/queue added to the worker would get priority. I don't want that; I'd want them all to have the same priority, as long as they are part of the same queue.
I'll just stick to the topic :)
Getting all queues in Resque is pretty easy:
Resque.queues
is a list of all queue names. It does not include the 'failed' queue, so I did something like this:
(['failed'] + Resque.queues).each do |queue|
  queue_size = queue == 'failed' ? Resque::Failure.count : Resque.size(queue)
end
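(For the other half of the question - binding a worker to a specific queue - the standard Resque convention is the QUEUE environment variable on the resque:work rake task; programmatically, a sketch might look like this, with queue_a standing in for one of the per-client queues above:)
# One worker process pinned to a single queue.
worker = Resque::Worker.new('queue_a')
worker.work(5) # poll every 5 seconds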
