We have a WCF service (hosted in a Windows service) that does some processing which can take several minutes to complete (1 to 5 minutes, sometimes more).
We need to poll a database periodically (every 2 minutes) to fetch the items to be processed and call this service for each item. I'm thinking of using Quartz.NET for this use case:
Create a Quartz job that runs every 2 minutes and gets the items to be processed from the DB.
For each record, create a job and schedule it to be executed immediately.
We want to limit the maximum number of concurrently processing jobs (say, 100 max) to avoid memory issues.
To do this, one option is to fetch only as many records from the DB as we have free capacity for;
i.e. if 60 jobs are currently executing, we fetch only 40 records.
This requires knowing how many jobs are currently executing before querying the DB.
When the first job fires to fetch the records from the DB, is it possible to know how many jobs are currently executing?
There's a method, IScheduler.GetCurrentlyExecutingJobs, which I think can be used to get the count of active jobs.
But the processing service is called via a one-way method, so how will Quartz know whether the processing is still active or has completed?
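To make the plan concrete, here is a rough sketch of the polling job's logic (written in Ruby purely for brevity; the real implementation would be C# against Quartz.NET, where the count would come from IScheduler.GetCurrentlyExecutingJobs().Count, and every method name below is a placeholder):

    MAX_CONCURRENT = 100

    # Runs every 2 minutes. 'scheduler' and 'db' stand in for the Quartz
    # scheduler and the data access layer.
    def poll_and_dispatch(scheduler, db)
      executing  = scheduler.currently_executing_jobs.size
      free_slots = MAX_CONCURRENT - executing
      return if free_slots <= 0                           # already at capacity

      db.fetch_pending(limit: free_slots).each do |item|  # e.g. fetch only 40 if 60 are running
        scheduler.schedule_immediately(item)              # one job per record, fired now
      end
    end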
When a Dataflow streaming job with autoscaling enabled is deployed, it starts on a single worker.
Let's assume the pipeline reads Pub/Sub messages, applies some DoFn operations, and writes to BigQuery.
Let's also assume the Pub/Sub backlog is already fairly large.
So the pipeline starts and pulls some Pub/Sub messages, processing them on the single worker.
After a couple of minutes, it realizes that extra workers are needed and creates them.
Many Pub/Sub messages have already been pulled and are being processed, but not yet acked.
And here is my question: how will Dataflow manage those in-flight, not-yet-acked elements?
My observations suggest that Dataflow sends many of those already-in-flight messages to a newly created worker, so we can see the same element being processed at the same time on two workers.
Is this expected behavior?
Another question is: what next? Does the first one win, or the new one?
I mean, we have the same Pub/Sub message still being processed on the first worker and on the new one.
What if the process on the first worker is faster and finishes first? Will its result be acked and sent downstream, or will it be dropped because a new process for this element is underway and only the new one can be finalized?
Dataflow provides exactly-once processing of every record. Funnily enough, this does not mean that user code is run only once per record, whether by the streaming or batch runner.
It might run a given record through a user transform multiple times, or it might even run the same record simultaneously on multiple workers; this is necessary to guarantee at-least-once processing in the face of worker failures. Only one of these invocations can “win” and produce output further down the pipeline.
More information here - https://cloud.google.com/blog/products/data-analytics/after-lambda-exactly-once-processing-in-google-cloud-dataflow-part-1
I have a question about enqueuing jobs using Sucker Punch.
I have 2000+ search keywords in my database, and I want to know the Google and Bing ranking for each keyword. For this I'm using the Authority Labs API, but Authority Labs will only process 1000 POST requests per hour. I'm sending each request to Authority Labs as a background job using Sucker Punch. How can I limit it so that only 1000 jobs run in an hour, with the remaining jobs starting only after that hour is up? I also want to run these jobs daily to analyse rank changes.
Rate limiting is not a concern of your queue system, much less of Sucker Punch, which is not designed to handle advanced delaying/queuing logic; it just moves asynchronous jobs to a thread from a thread pool.
If you really want rate limiting, use a real queue system like Sidekiq, and put some actual code to work.
Sidekiq Enterprise supports it natively: https://github.com/mperham/sidekiq/wiki/Ent-Rate-Limiting
Sidekiq-throttler seems to provide the same functionality: https://github.com/gevans/sidekiq-throttler
But you can also just delay execution (pre-emptively limiting the rate), by enqueuing jobs at specific times in the future (each executing 4 minutes after the other), or by enqueuing just one job that performs the next outstanding request and then re-enqueues itself with a 4-minute delay.
As always with open source, check the code and decide by yourself.
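A minimal sketch of that last, self-re-enqueuing approach with Sidekiq (Keyword and AuthorityLabsClient are placeholder names for your model and API wrapper; note that a 1000-requests/hour cap would actually allow spacing as tight as ~3.6 seconds, so tune the interval to taste):

    require 'sidekiq'

    class RankCheckJob
      include Sidekiq::Worker

      INTERVAL = 4 * 60  # seconds between requests; anything that keeps you under the API cap

      def perform(keyword_ids)
        return if keyword_ids.empty?

        current, *rest = keyword_ids
        AuthorityLabsClient.post_rank_request(Keyword.find(current))  # one POST per run

        # Do the next outstanding request INTERVAL seconds from now.
        self.class.perform_in(INTERVAL, rest) if rest.any?
      end
    end

    # Kick off once a day, e.g. from a cron or clock process:
    # RankCheckJob.perform_async(Keyword.pluck(:id))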
Could you do something like this?
YourProcessingJob.set(wait: 1.hours).perform_later
Possibly in a custom rake task...
I'm migrating from Delayed::Job to Resque and I'm having difficulty finding the best way to handle these cases:
A user can NOT add the same command twice to the list of jobs (e.g. "export all my data"); only one export command at a time. For others it's fine to have many (e.g. sending emails).
Some jobs should not run for more than 5 minutes, while others are allowed to run for 30 minutes. In both cases, I'd like a time-out in case the process is blocked or does not complete on time.
Can add jobs to start in a few days.
Inform the user of all their current and future jobs.
Can cancel some jobs (current and future) for the user.
Keep the ability to have different lists (mostly for priorities / slow and fast tasks).
I looked at resque-status and it seems to provide the low-level querying, but I would still need to do my per-user job management.
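For the uniqueness requirement, the closest I've come so far is wrapping the enqueue in a per-user Redis lock, something like this sketch (the job class and key format are made up):

    class ExportJob
      @queue = :slow

      # Enqueue only if no export is already queued or running for this user.
      def self.enqueue_once(user_id)
        lock_key = "jobs:export:#{user_id}"
        return false unless Resque.redis.setnx(lock_key, Time.now.to_i)
        Resque.enqueue(ExportJob, user_id)
        true
      end

      def self.perform(user_id)
        # ... do the export ...
      ensure
        Resque.redis.del("jobs:export:#{user_id}")  # release so the user can enqueue again
      end
    end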
Suggestions on best way to handle this?
I have a project which still uses Delayed::Job as its processing job queue. I've recently found an edge case which is making me question a few things: I have an AR object (I'm using MySQL, by the way) which, on update, sends a message to all the elements of a has_many association. In order to do that, I have to instantiate all the elements of the association and call the message on them. It seemed only fair enough to delay the call of this message for each one of them.
Now the association has grown quite a bit, and in an edge case I have 40,000 objects belonging to it. Sending the message now involves the (synchronous) creation of 40,000 delayed-job records. Since this happens inside an after_update callback, not after_commit, it (ab)uses the same connection, without taking advantage of any context switching. Short version: I have a pipeline of 1 UPDATE statement and 40,000 INSERTs on the same connection. This update is gobbling quite a few minutes in production, for that reason.
Now, there are a lot of ways around this: change the callback to an after_commit, create one (synchronous) delayed job which will in turn create the 40,000 jobs (I don't want to handle the 40,000 (AR) objects in one job; today's 40,000 will be 120,000 tomorrow, and that's memory Armageddon), etc., etc.
But what I'm really considering is switching my delayed processing queue to Resque or Sidekiq. They use Redis, so write performance is far better, and they use something other than MySQL, which means the connections will not block each other. My only issue is: how much would 40,000 writes at once to Redis cost me? And: do any of these options first store the jobs in memory, without blocking the response to the client, and only later persist them to Redis? So, my real question is: how much would this delaying delay me in such an edge case?
Indeed, Redis can process writes faster than MySQL. Try running redis-benchmark; you'll see figures of 100k+ writes/sec.
do any of these options first store the jobs in memory, without blocking the response to the client, and only later persist them to Redis?
No, they do it synchronously.
I don't want to handle the 40,000 (AR) objects in one job
Maybe you should try a hybrid approach: process chunks of N objects per job. Batch writes should be faster than 40k individual writes, and it scales well (the batch size stays the same, be it 40k or 400k items).
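For example, with Sidekiq the enqueue side can be batched as well via push_bulk, which cuts the 40k round-trips to Redis down to a handful (NotifyJob, Element, and notify! are placeholder names):

    require 'sidekiq'

    class NotifyJob
      include Sidekiq::Worker

      # Each job handles one slice of ids instead of a single record.
      def perform(ids)
        Element.where(id: ids).find_each(&:notify!)
      end
    end

    # After commit on the parent record:
    slices = parent.elements.pluck(:id).each_slice(500).map { |slice| [slice] }  # 500 per job; tune N
    Sidekiq::Client.push_bulk('class' => NotifyJob, 'args' => slices)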
I need to process files which get uploaded, and processing can take as little as 1 second or as much as 10 minutes. Currently my solution is to make a Quartz job with a 30-second timer and process whatever job is pending whenever it fires. There are several problems with this.
One: if a job takes less than a few seconds, it is wasteful to make it wait up to 30 seconds in the job queue.
Two: if there is only one long job in the queue, it could feasibly be picked up twice.
What I want is a timer-less queue: when items are added, they are started immediately if there is a free worker. Is there a solution for this? I was looking at Jesque, but I couldn't tell whether it can do this.
What you are looking for is a basic message queue. There are lots of options out there, but my favorite for Grails is RabbitMQ. The Grails plugin for it is quite good and it performs well in my experience.
In general, message queues allow you to have N producers (things creating jobs) adding work messages to a queue, and M consumers pulling jobs off the queue and processing them. When a worker completes its job, it simply asks the queue for the next job to process and, if there is none, waits for the queue to give it something to do. The queue also keeps track of the success or failure of message processing (you can control this) so that you don't give the same message to more than one worker.
This has the advantage of not relying on polling (so you can start processing as soon as things come in) and it's also much more scalable. You can scale both your producers and consumers up or down as needed, decoupling the inputs from the outputs so that you can absorb a traffic spike and then work your way through it as you have the resources (workers) available.
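The consumer side of that pattern looks roughly like this (shown with the Ruby bunny client purely to illustrate the shape; the Grails RabbitMQ plugin exposes the same concepts, and the queue name and process_file are placeholders):

    require 'bunny'

    conn = Bunny.new            # connect to RabbitMQ on localhost
    conn.start
    channel = conn.create_channel
    queue   = channel.queue('file.process', durable: true)
    channel.prefetch(1)         # hand each worker only one unacked message at a time

    queue.subscribe(manual_ack: true, block: true) do |delivery, _props, body|
      process_file(body)                   # your processing logic
      channel.ack(delivery.delivery_tag)   # ack only after success, so nothing is lost
    end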
To solve problem one, just make the job check for newly uploaded files every 5 seconds (or 3 seconds, or 1 second). If the check for uploaded files is quick, then there is no reason you can't run it often.
For problem two, you just need to record when you start processing a file to ensure it doesn't get picked up twice. You could create a table in the database, or store the information in memory somewhere.
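If you go the database route, the claim needs to be atomic so that two pollers can't grab the same file. A sketch of the idea (written with ActiveRecord for brevity; UploadedFile, its status column, and process are made up, and the same single-statement UPDATE works from any ORM or plain SQL):

    file = UploadedFile.where(status: 'pending').first
    if file
      # The conditional UPDATE succeeds for exactly one caller;
      # the affected-row count tells us whether we won the claim.
      claimed = UploadedFile.where(id: file.id, status: 'pending')
                            .update_all(status: 'processing', claimed_at: Time.now)
      process(file) if claimed == 1
    end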