Using a producer/consumer pattern with Quartz.net

Until today I've used Quartz.net for a single job, or for different jobs that ran independently. Now I need to run 2 different jobs based on some data I have. I've created 3 jobs for now and call them one after the other (I'm doing ugly things, which I won't write here, to run them one after the other...).
I was wondering: if I use a queue shared between the jobs (or even 3 background workers, each working on its own queue), can I get this working with Quartz.net?
I mean, the main job runs, the others work on their own queues, but the main job is not scheduled again until the workers have finished?
For now I've just put the [DisallowConcurrentExecution] attribute on the main job.
Thanks

Related

How to Chain Rails ActiveJobs

I am looking for a way to be able to run Active Jobs serially. Ideally, a long-running Job 1 is scheduled to run at a certain time. A similar long-running Job 2 is slated to run only after Job 1 completes. Job 3 then waits for Job 2 to run to completion before it starts, and so on.
I have to admit that I am rather new to background jobs in Rails but I am already using Active Job with Sidekiq as the job runner for simple fire-and-forget tasks.
I like Active Job because it provides a simple enough interface to dive almost immediately into background jobs processing. I can use Sidekiq without having to define workers, for example.
For reference, I have achieved something similar but it was on .NET using the excellent Hangfire library which has continuations where you pass the ID of a parent job ensuring that the job will run only after the parent job has successfully completed.
It would be nice to have something as clean and simple as that using Sidekiq and Active Job but really any alternative ways to achieve the same thing are welcome. It doesn't have to be Sidekiq and Active Job.
The most straightforward way to do this is to call the third job from within the second job, and the second one from within the first job.
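For illustration, a minimal Active Job sketch of that approach; the job classes JobOne, JobTwo and JobThree are hypothetical, and any adapter (including Sidekiq) should work the same way:

# Sketch only: each job enqueues the next one as its last step,
# so the steps run serially even though each runs in the background.
class JobOne < ApplicationJob
  queue_as :default

  def perform(*args)
    # ... long-running work for step 1 ...
    JobTwo.perform_later(*args) # only reached if step 1 finished without raising
  end
end

class JobTwo < ApplicationJob
  queue_as :default

  def perform(*args)
    # ... long-running work for step 2 ...
    JobThree.perform_later(*args)
  end
end

class JobThree < ApplicationJob
  queue_as :default

  def perform(*args)
    # ... final step ...
  end
end

# Only the first job is scheduled; the rest follow automatically:
JobOne.set(wait_until: Date.tomorrow.noon).perform_later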

How can I configure Delayed jobs to not wait for a task before starting the others?

I am using Delayed Job for my Ruby app hosted on Heroku to perform a very long task that can take up to 5 minutes.
I've noticed that, in development mode at least, when this task is running the ones that come afterwards are not started until that one finishes. I would like other tasks to be able to start running without having to wait for the other to finish (to have at least 3 concurrent tasks, for example).
I don't wish to increase the number of workers in Heroku ($$$).
I noticed the 'pool' param in Delayed Job but I don't fully understand if this is what I need or how to use it.
https://github.com/collectiveidea/delayed_job/blob/master/README.md
I achieved it using threads in the task code, but maybe this is not the best way to do it.
If you could tell me exactly how I could achieve concurrency in delayed jobs I would really appreciate it.
A DJ worker only runs a single job at a time. If you want concurrent processing of your background jobs, you'll need multiple background workers.
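If you stay with Delayed Job, this usually just means starting more than one worker process; the --pool option mentioned in the question is meant for dedicating pools of workers to specific queues. A rough sketch (flag syntax can differ between versions, so check the README linked above; the queue names are just examples):

# Sketch only: start three plain worker processes
RAILS_ENV=production bin/delayed_job -n 3 start

# Or dedicate pools of workers to named queues
RAILS_ENV=production bin/delayed_job --pool=long_tasks:2 --pool=default start

Several worker processes can run on a single machine or dyno, but they share its memory, so whether this avoids paying for extra Heroku workers depends on how heavy each job is.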
You are way better off implementing Sidekiq.

How to correctly use Resque workers?

I have the following tasks to do in a rails application:
Download a video
Trim the video with FFMPEG between a given duration (Eg.: 00:02 - 00:09)
Convert the video to a given format
Move the converted video to a folder
Since I wanted to make this happen in background jobs, I used 1 resque worker that processes a queue.
For the first job, I have created a queue like this
@queue = :download_video. It does its task, and at the end of the task I go forward to the next task by calling Resque.enqueue(ConvertVideo, name, itemId). In this way, I have created a chain of queues that are enqueued when one task is finished.
This is very wrong, since if the first job starts to enqueue the other jobs (one from another), then everything gets blocked with 1 worker until the first list of queued jobs is finished.
How should this be optimised? I tried adding more workers to this way of enqueueing jobs, but the results are wrong and unpredictable.
Another aspect is that each job is saving a status in the database and I need the jobs to be processed in the right order.
Should each worker do a single job from above and have at least 4 workers? If I double the amount to 8 workers, would it be an improvement?
Have you considered using Sidekiq?
As said in the Sidekiq documentation:
resque uses redis for storage and processes messages in a single-threaded process. The redis requirement makes it a little more difficult to set up, compared to delayed_job, but redis is far better as a queue than a SQL database. Being single-threaded means that processing 20 jobs in parallel requires 20 processes, which can take a lot of memory.
sidekiq uses redis for storage and processes jobs in a multi-threaded process. It's just as easy to set up as resque but more efficient in terms of raw processing speed. Your worker code does need to be thread-safe.
So you should have two kinds of jobs: download video and convert video. Download jobs can run in parallel (you can limit that if you want), and each result is then put on one queue (the "in-between queue") before being converted by multiple convert jobs in parallel.
I hope that helps, this link explains quite well the best practices in Sidekiq : https://github.com/mperham/sidekiq/wiki/Best-Practices
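For illustration, a minimal Sidekiq sketch of the two-stage setup described above; the worker class names and arguments are hypothetical, and per-queue concurrency can be tuned in Sidekiq's configuration:

require 'sidekiq'

class DownloadVideoWorker
  include Sidekiq::Worker
  sidekiq_options queue: :download_video

  def perform(name, item_id)
    # ... download and trim the video, save its status in the database ...
    # hand the result to the "in-between queue" for conversion
    ConvertVideoWorker.perform_async(name, item_id)
  end
end

class ConvertVideoWorker
  include Sidekiq::Worker
  sidekiq_options queue: :convert_video

  def perform(name, item_id)
    # ... convert the video, move it to its folder, update the status ...
  end
end

Several downloads can then run in parallel while conversions are processed independently from their own queue.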
As @Ghislaindj noted, Sidekiq might be an alternative, largely because it offers plugins that control execution ordering.
See this list:
https://github.com/mperham/sidekiq/wiki/Related-Projects#execution-ordering
Nonetheless, yes, you should be using different queues and more workers which are specific to each queue. So you have a set of workers all working on the :download_video queue and then other workers attached to the :convert_video queue, etc.
If you want to continue using Resque another approach would be to use delayed execution, so when you enqueue your subsequent jobs you specify a delay parameter.
Resque.enqueue_in(10.seconds, ConvertVideo, name, itemId)
The down-side to using delayed execution in Resque is that it requires the resque-scheduler package, so you're introducing a new dependency:
https://github.com/resque/resque-scheduler
For comparison Sidekiq has delayed execution natively available.
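For example, reusing the hypothetical ConvertVideoWorker from the sketch above, Sidekiq can schedule the equivalent of the Resque call without any extra gem:

# Runs the job roughly 10 seconds from now, using Sidekiq's built-in scheduling
ConvertVideoWorker.perform_in(10.seconds, name, itemId)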
Have you considered merging all four tasks into just one? In this case you can have any number of workers and one will do the job. It will work very predictably, and you can even know how much time it will take to finish the task. You also don't have problems when one of the subtasks takes longer than all the others and piles up in the queue.

How do I create more than one instance of the Quartz.Net scheduler?

Is it safe to run multiple instances of the quartz.net scheduler?
If so, how do I do it?
You can use quartz_jobs.xml to configure jobs, create StatefulJobs, and use job chaining to run jobs sequentially in a single-threaded scheduler (pointing to a RAMJobStore); another scheduler pointing to a data store can run simultaneously.
http://quartz-scheduler.org/documentation/faq#FAQ-chain
If you need to persist all jobs to a single database, you can use 2 schedulers with clustering, but you won't get to choose which job runs on which scheduler, so your jobs will run sequentially but may not run on the single-threaded scheduler. 2 schedulers can be run if having 2 Quartz table sets with different prefixes is not an issue.
http://quartz-scheduler.org/documentation/quartz-1.x/cookbook/MultipleSchedulers

rails backgroundjob running jobs in parallel?

I'm very happy with Bj so far, but I have this one issue:
When one process takes 1 or 2 hours to complete, all other jobs in the queue seem to wait for that one job to finish. Worse still is when uploading to a server which times out regularly.
My question: is Bj running jobs in parallel or one after another?
Thank you,
Damir
BackgroundJob will only allow one worker to run per webserver instance. This is by design to keep things simple. Here is a quote from Bj's README:
If one ignores platform specific details the design of Bj is quite simple: the
main Rails application submits jobs to table, stored in the database. The act
of submitting triggers exactly one of two things to occur:
1) a new long running background runner to be started
2) an existing background runner to be signaled
The background runner refuses to run two copies of itself for a given
hostname/rails_env combination. For example you may only have one background
runner processing jobs on localhost in development mode.
The background runner, under normal circumstances, is managed by Bj itself -
you need do nothing to start, monitor, or stop it - it just works. However,
some people will prefer manage their own background process, see 'External
Runner' section below for more on this.
The runner simply processes each job in a highest priority oldest-in fashion,
capturing stdout, stderr, exit_status, etc. and storing the information back
into the database while logging it's actions. When there are no jobs to run
the runner goes to sleep for 42 seconds; however this sleep is interuptable,
such as when the runner is signaled that a new job has been submitted so,
under normal circumstances there will be zero lag between job submission and
job running for an empty queue.
You can learn more on the GitHub page.
