We are using Quartz.NET to schedule thousands of jobs. In case some of the jobs do not fire (due to thread unavailability and the misfire threshold setting), is there a way we can send a notification with the list of jobs that didn't fire?
You can create a class that implements ITriggerListener and register it with the scheduler; the scheduler will then notify you of any misfire by calling the listener's TriggerMisfired method.
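A minimal sketch of such a listener, assuming the Quartz.NET 2.x synchronous API; how the collected list is actually sent (e-mail, log, etc.) is left as a placeholder:

using System;
using System.Collections.Concurrent;
using Quartz;

// Collects misfired triggers so they can be reported in a batch later.
public class MisfireNotificationListener : ITriggerListener
{
    private readonly ConcurrentQueue<string> misfired = new ConcurrentQueue<string>();

    public string Name
    {
        get { return "MisfireNotificationListener"; }
    }

    public void TriggerMisfired(ITrigger trigger)
    {
        // Record which job misfired and when; a real implementation would
        // periodically drain this queue and send the notification.
        misfired.Enqueue(string.Format("{0} misfired at {1:u}",
            trigger.JobKey, DateTimeOffset.UtcNow));
    }

    // The remaining callbacks are not needed for misfire tracking.
    public void TriggerFired(ITrigger trigger, IJobExecutionContext context) { }
    public bool VetoJobExecution(ITrigger trigger, IJobExecutionContext context)
    {
        return false;
    }
    public void TriggerComplete(ITrigger trigger, IJobExecutionContext context,
        SchedulerInstruction triggerInstructionCode) { }
}

Register it once at startup with scheduler.ListenerManager.AddTriggerListener(new MisfireNotificationListener()); without registration the listener is never called.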
How can I make Sidekiq execute a job only after the previously enqueued job in the same worker has finished? For example:
I trigger the first job this morning:
GoodWorker.perform_async(params) #=> JID-eetc
While it is still in progress, I enqueue another job on the same worker dynamically:
GoodWorker.perform_async(params) #=> JID-eetc2
and so on.
What happens now is that Sidekiq processes the jobs concurrently;
is there a way to perform them one at a time?
Short answer: no.
Long answer: You can use a mutex to guarantee that only one instance of a worker is executing at a time. If you're running on a cluster, you'll need to use Redis or some other medium to maintain the mutex. Otherwise, you might try putting these jobs in their own queue, and firing up a separate instance of Sidekiq that only monitors that queue, with a concurrency of one.
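A rough sketch of the Redis-backed mutex idea, reusing the asker's GoodWorker; the lock key, timeout, and retry interval are all illustrative:

class GoodWorker
  include Sidekiq::Worker

  LOCK_KEY = 'good_worker:lock'.freeze # illustrative key name

  def perform(*args)
    acquired = Sidekiq.redis do |redis|
      # NX = set only if the key is absent; EX = expire the lock so a
      # crashed worker cannot wedge the queue forever.
      redis.set(LOCK_KEY, jid, nx: true, ex: 600)
    end

    unless acquired
      # Another instance holds the lock: retry this job in 30 seconds.
      self.class.perform_in(30, *args)
      return
    end

    begin
      do_the_work(*args)
    ensure
      Sidekiq.redis { |redis| redis.del(LOCK_KEY) }
    end
  end

  private

  def do_the_work(*args)
    # ... the actual job body ...
  end
end

This trades strict ordering for mutual exclusion: jobs that find the lock taken are pushed back and retried, so only one runs at a time.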
Can you not set up Sidekiq to have only one thread? Then only one job will be executed at a time.
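That amounts to running the instance with a concurrency of one, e.g. in config/sidekiq.yml (file path illustrative):

:concurrency: 1
:queues:
  - default

Or equivalently, start the process with sidekiq -c 1. Note this serializes every job handled by that process, not just GoodWorker.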
What is the best approach to make sure specific background jobs (DelayedJob or Resque) are executed sequentially, instead of in parallel? I guess one option is to have a dedicated queue and assign only one worker to it. Is there a better approach?
I have built background jobs with ActiveJob & Resque before. One way to do it is by checking and setting a status (e.g. pending, in-progress, ...) for each job, and handing the work to the background jobs one at a time; a sketch follows.
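A rough sketch of that status-chaining idea with ActiveJob, assuming a Task ActiveRecord model with a status column; all names here are illustrative:

class SequentialJob < ActiveJob::Base
  queue_as :serial

  def perform(task_id)
    task = Task.find(task_id)
    task.update!(status: 'in-progress')
    do_the_work(task)
    task.update!(status: 'done')
  ensure
    # Chain the next pending task, so only one is in flight at a time.
    next_task = Task.where(status: 'pending').order(:id).first
    SequentialJob.perform_later(next_task.id) if next_task
  end

  private

  def do_the_work(task)
    # ... the actual work ...
  end
end

New work is inserted as a pending Task row; the chain is kicked off by enqueuing the job for the first one.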
I am using Resque to enqueue jobs.
I start a worker and the jobs are processed.
My jobs extend a gem that implements job hooks like before_enqueue, after_enqueue, before_perform, after_perform and sends stuff to statsd. Those work. However, before_dequeue and after_dequeue do not seem to be called. Is there a reason why?
Also, my understanding of Resque isn't all quite there. I would call Resque.enqueue to queue up a job class, and then if I start a Resque worker, it will automatically pop a task from the queue and perform it. Where does dequeue come into play? I notice that dequeue destroys the task; when does the dequeue step happen in the Resque worker workflow?
I want to hook into after_dequeue because I want to log the time that a task stays in the queue, so I need to hook into before_enqueue and after_dequeue.
So dequeue is used by the client to manually remove queued jobs from Redis/Resque; it is not part of the worker's pop-and-perform loop, which is why the dequeue hooks never fire. To calculate the time a job spends in the queue, I will have to capture the time in after_enqueue and before_perform. When a Resque worker pops a job off the queue, there is no hook we can attach to.
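A rough sketch of that measurement, using Resque's plugin-style hook naming; the Redis key scheme and the StatsD call are illustrative, and two jobs enqueued with identical args will overwrite each other's timestamps:

require 'digest/sha1'
require 'resque'

class HardJob
  @queue = :hard

  def self.time_key(*args)
    "job_enqueued_at:#{name}:#{Digest::SHA1.hexdigest(args.inspect)}"
  end

  def self.after_enqueue_record_time(*args)
    # Remember when this job entered the queue; expire after a day.
    Resque.redis.setex(time_key(*args), 24 * 3600, Time.now.to_f)
  end

  def self.before_perform_record_time(*args)
    enqueued_at = Resque.redis.get(time_key(*args))
    if enqueued_at
      queue_time = Time.now.to_f - enqueued_at.to_f
      # e.g. StatsD.timing('hard_job.queue_time', queue_time)
    end
  end

  def self.perform(*args)
    # ... the actual work ...
  end
end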
Configuration
Quartz.net 2.0.1
JobStore: SqlServer
Stateful jobs run inside a Windows service.
There is also a console application that allows firing the same jobs that live inside the service.
I want to initialize the Quartz scheduler correctly so that it respects non-concurrent job execution while still allowing a given job to be fired immediately from the console.
As long as you mark your job with the DisallowConcurrentExecution attribute, you can schedule your jobs from any source you want and the scheduler will make sure only one instance runs at a time.
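A minimal sketch, with ReportJob standing in for your job class; [PersistJobDataAfterExecution] is the 2.x counterpart of the old stateful-job (IStatefulJob) behaviour:

using Quartz;

// Quartz guarantees at most one instance of this job runs at a time,
// even when it is fired from several sources.
[DisallowConcurrentExecution]
[PersistJobDataAfterExecution] // keep JobDataMap changes, as IStatefulJob did
public class ReportJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // ... job body ...
    }
}

The console application can then fire it on demand with scheduler.TriggerJob(new JobKey("reportJob", "reports")). If the service and the console run separate scheduler instances against the same AdoJobStore, both should run in clustered mode (quartz.jobStore.clustered = true) so the non-concurrency guarantee holds across processes.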
I am looking into Quartz.NET to process our daily jobs and it seems to handle our scenarios. We also have jobs that are added to a queue table in SqlServer and this table is polled for work every few seconds. How can this be handled using Quartz.NET?
Quartz.net will replace the queue/polling functionality. Quartz.net has its own queue (the job store), which it polls and executes jobs from.
What you will do is schedule your jobs in Quartz (queue them), and then Quartz.Net will execute them at the time they need to be executed (determined by the trigger). In Quartz.Net the job (which does the work) is separate from the trigger (which determines when the job should run); a short sketch follows.
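A minimal sketch with the 2.x fluent API; ReportJob and the cron expression are illustrative:

using Quartz;
using Quartz.Impl;

// The job describes the work; the trigger describes when it runs.
ISchedulerFactory factory = new StdSchedulerFactory();
IScheduler scheduler = factory.GetScheduler();
scheduler.Start();

IJobDetail job = JobBuilder.Create<ReportJob>()
    .WithIdentity("nightlyReport", "reports")
    .Build();

ITrigger trigger = TriggerBuilder.Create()
    .WithIdentity("nightlyReportTrigger", "reports")
    .WithCronSchedule("0 0 2 * * ?") // every day at 02:00
    .Build();

scheduler.ScheduleJob(job, trigger);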