Multiple instances of a Quartz job to emulate timer functionality - Grails

I have a Grails application and need to set a timer, something that will initiate a broadcast via WebSocket at a given time, with the following stipulations:
A timer can be postponed or cancelled
There might be several different timers running at the same time (but with different "contexts")
Clustered mode should be supported, i.e. a timer fires only once regardless of the number of instances of my app in a cluster.
The solution I have come up with is:
Use Quartz to create an ordinary job without any given date for when it should be fired
The moment I want to set a timer, I create a new instance of the job with a cronExpression to fire it at a given time, and then save it persistently
Should I need to postpone the timer, I fetch it from the DB and rewrite the cronExpression to a new value instead.
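Independent of Quartz specifics, the proposed pattern — one persistent timer record per context, postponed by rewriting its schedule — can be sketched as follows. This is a language-agnostic illustration written in Ruby with all names made up; in Quartz itself it would correspond to one durable JobDetail plus one CronTrigger per context, rescheduled to postpone:

```ruby
# One row per timer "context"; persisting it is what makes the
# schedule survive restarts and visible to every cluster node.
Timer = Struct.new(:context, :fire_at, :cancelled)

TIMERS = {}  # stand-in for the database table

def set_timer(context, fire_at)
  TIMERS[context] = Timer.new(context, fire_at, false)
end

def postpone(context, new_fire_at)
  TIMERS[context].fire_at = new_fire_at  # rewrite the schedule in place
end

def cancel(context)
  TIMERS[context].cancelled = true
end

def due(now = Time.now)  # what the scheduler would fire this tick
  TIMERS.values.reject(&:cancelled).select { |t| t.fire_at <= now }
end

set_timer("auction-42", Time.now - 1)
set_timer("auction-43", Time.now - 1)
postpone("auction-43", Time.now + 3600)  # pushed an hour into the future
set_timer("auction-44", Time.now - 1)
cancel("auction-44")
due.map(&:context)   # => ["auction-42"]
```

In clustered mode the hard part is not this bookkeeping but ensuring only one node fires each due timer, which is exactly what Quartz's JDBC job store provides.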
My concerns are:
Is there any other way I can "set a timer" in Grails, possibly without using the concept of Quartz jobs?
It is possible to have multiple instances of a Quartz job, but is it the right way to use Quartz, or should it be avoided? Maybe I should use custom triggers instead?
If I go the way I explained above, is it going to work in clustered mode (multiple instances)?

Related

How do I realize repetitive tasks in wolkenkit?

What would be the best way to implement repetitive tasks in wolkenkit?
Let's say I want to import calendar events on a daily basis or fetch data from a server to update some kind of aggregate. What would be the best practice here?
I thought about setting up a timer somewhere that sends commands on a regular basis so that the aggregate's data can be updated, but I am not quite sure about where to put the timer. After searching a bit online I am unsure if this is something I am not supposed to do at all, to be honest.
Currently there is no built-in mechanism for time-based scheduling. But you can create a Node script that fetches data from a server and then uses the client SDK to send commands in order to update aggregates. You can use a scheduling mechanism such as https://www.npmjs.com/package/node-cron to run it repeatedly.

How to reflect progress on before_update?

I have a project in which when I try to update some attribute, a long and exhausting before_update function runs. This function runs some scripts, and when they're finished successfully the attribute is changed.
The problem is that I want a way to reflect the current status of the currently running scripts (to display some sort of 2/5… 3/5… progress), but I can't figure out a solution. I tried saving the last running command in the DB, but because the scripts run in a before_update scope, the commit happens only after all scripts are finished.
Is there any elegant solution to this kind of problem?
In general, you should avoid running expensive, cross-cutting code in callbacks. A time will come when you want to update one of those records without running that code, and then you'll start adding flags to determine when that callback should run, and all sorts of other nastiness. Also, if the record is being updated during a request, the expensive callback code will slow the whole request down, and potentially time out and/or block other visitors from accessing your application.
The way to architect this would be to create the record first (perhaps with a flag/state that tells the rest of your app that the update hasn't been "processed" yet - meaning that related code currently in your callback hasn't run yet). Then, you'd enqueue a background job that does whatever is in your callback. If you are using Sidekiq, you can use the sidekiq-status gem to update the job's status as it's running.
You'd then add a controller/action that checks up on the job's status and returns it in JSON, and some JS that pings that action every few seconds to check up on the status of the job and update your interface accordingly.
Even if you didn't want to update your users on the status of the job, a background job would probably still be in order here - especially if that code is very expensive, or involves third-party API calls. If not, it likely belongs in the controller, and you could run it all in a transaction. But if you need to update your users on the status of that work, a background job is the way to go.
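The flow above can be reduced to a minimal stdlib-only sketch: the job updates a shared status store as it runs, and a controller action serializes that status for the polling JavaScript. In a real app the store would be Redis via sidekiq-status; all names here are hypothetical:

```ruby
require 'json'

# Stand-in for a job-status store; sidekiq-status would keep this
# in Redis. All class and method names here are hypothetical.
JOB_STATUS = {}

class ProcessRecordJob
  TOTAL_STEPS = 5

  def self.perform(record_id)
    TOTAL_STEPS.times do |i|
      run_step(record_id, i + 1)   # one of the long-running scripts
      JOB_STATUS[record_id] = { step: i + 1, total: TOTAL_STEPS }
    end
    JOB_STATUS[record_id][:done] = true
  end

  def self.run_step(record_id, n)
    # placeholder for the expensive work formerly done in before_update
  end
end

# The controller action would render this for the polling JavaScript:
def status_json(record_id)
  (JOB_STATUS[record_id] || { step: 0, total: ProcessRecordJob::TOTAL_STEPS }).to_json
end

ProcessRecordJob.perform(42)
puts status_json(42)   # prints {"step":5,"total":5,"done":true}
```

The key difference from the callback approach is that the status writes happen outside the record's own transaction, so they are visible while the work is still in flight.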

C# 5 .NET MVC long async task, progress report and cancel globally

I use ASP.NET MVC 5 and I have a long-running action which has to poll web services, process data, and store it in a database.
For that I want to use the TPL (Task Parallel Library) to start the task asynchronously.
But I wonder how to do three things:
I want to report progress of this task. For this I'm thinking about SignalR.
I want to be able to leave the page where I start this task and still see its progress across the website (from a panel on the left, but that part is fine).
And I want to be able to cancel this task globally (from my panel on the left).
I know a fair amount about all of the technologies involved, but I'm not sure about the best way to achieve this.
Can someone help me find the best solution?
The fact that you want to run long running work while the user can navigate away from the page that initiates the work means that you need to run this work "in the background". It cannot be performed as part of a regular HTTP request because the user might cancel his request at any time by navigating away or closing the browser. In fact this seems to be a key scenario for you.
Background work in ASP.NET is dangerous. You can certainly pull it off but it is not easy to get right. Also, worker processes can exit for many reasons (app pool recycle, deployment, machine reboot, machine failure, stack overflow or OOM exception on an unrelated thread). So make sure your long-running work tolerates being aborted mid-way. You can reduce the likelihood that this happens but never exclude the possibility.
You can make your code safe in the face of arbitrary termination by wrapping all work in a transaction. This of course only works if you don't cause non-transacted side-effects like web-service calls that change state. It is not possible to give a general answer here because achieving safety in the presence of arbitrary termination depends highly on the concrete work to be done.
Here's a possible architecture that I have used in the past:
When a job comes in you write all necessary input data to a database table and report success to the client.
You need a way to start a worker to work on that job. You could start a task immediately for that. You also need a periodic check that looks for unstarted work in case the app exits after having added the work item but before starting a task for it. Have the Windows task scheduler call a secret URL in your app once per minute that does this.
When you start working on a job you mark that job as running so that it is not accidentally picked up a second time. Work on that job, write the results and mark it as done. All in a single transaction. When your process happens to exit mid-way the database will reset all data involved.
Write job progress to a separate table row on a separate connection and separate transaction. The browser can poll the server for progress information. You could also use SignalR but I don't have experience with that and I expect it would be hard to get it to resume progress reporting in the presence of arbitrary termination.
Cancellation would be done by setting a cancel flag in the progress information row. The app needs to poll that flag.
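The lifecycle in the steps above (enqueue, claim, work, progress writes, cancel flag) can be sketched like this, with an in-memory array standing in for the database tables. Ruby is used for brevity and every name is hypothetical; in the real architecture each state change would be a transacted database write:

```ruby
# In-memory stand-in for the jobs table; each field mirrors a column.
Job = Struct.new(:id, :state, :progress, :cancel_requested, keyword_init: true)

JOBS = []

def enqueue(id)                 # step 1: record the work, report success
  JOBS << Job.new(id: id, state: "pending", progress: 0, cancel_requested: false)
end

def claim_next                  # periodic check picks up unstarted work
  job = JOBS.find { |j| j.state == "pending" }
  job&.state = "running"        # mark it so it isn't picked up twice
  job
end

def work(job, total_steps: 5)
  total_steps.times do |i|
    # the app polls the cancel flag between units of work
    return job.state = "cancelled" if job.cancel_requested
    # ... one unit of (transacted) work would go here ...
    job.progress = (i + 1) * 100 / total_steps   # progress row update
  end
  job.state = "done"
end

enqueue(1)
job = claim_next
work(job)
job.state   # => "done"
```

Because the state transitions are explicit rows, both the periodic "find unstarted work" sweep and the browser's progress polling are plain SELECTs against the same tables.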
Maybe you can make use of message queueing for job processing but I'm always wary to use it. To process a message in a transacted way you need MSDTC which is unsupported with many high-availability solutions for SQL Server.
You might think that this architecture is not very sophisticated. It makes use of polling for lots of things. Polling is a primitive technique but it works quite well. It is reliable and well-understood. It has a simple concurrency model.
If you can assume that your application never exits at inopportune times the architecture would be much simpler. But this cannot be assumed. You cannot assume that there will be no deployments during work hours and that there will be no bugs leading to crashes.
Even though using an HTTP worker to run a long task is a bad idea, I have made a small example of how to manage it with SignalR.
Inside this example you can:
Start a task
See task progression
Cancel task
It's based on:
twitter bootstrap
knockoutjs
signalR
C# 5.0 async/await with CancellationToken and IProgress
You can find the source of this example here :
https://github.com/dragouf/SignalR.Progress

Is it possible to string / queue Ruby actions?

I've written a number of actions in a RoR app that perform different steps of a process.
E.g.
- One action communicates with a third party service using their API and collects data.
- Another processes this data and places it into a relevant database.
- Another takes this new data and formats it in a specific way.
etc..
I would like to fire off the process at timed intervals, e.g. each hour. But I don't want to do the whole thing each time.
Sometimes I may just want to do the first two actions. At other times, I might want to do each part of the process.
So one action runs, and then when it's finished it calls another action, and so on.
The actions could take up to an hour to complete, if not longer, so I need a solution that won't timeout.
What would be the best way to achieve this?
You have quite a few options for processing jobs in the background:
Sidekiq: http://mperham.github.io/sidekiq/
Queue Classic: https://github.com/ryandotsmith/queue_classic
Delayed Job: https://github.com/collectiveidea/delayed_job
Resque: https://github.com/resque/resque
Just read through and pick the one that seems to fit your criteria the best.
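Whichever library you pick, the chaining the question asks about is usually done by having each job enqueue its successor when it finishes, so you can start the pipeline at any step. A stdlib-only sketch of that pattern (step names and data are made up; in Sidekiq or Resque each lambda would be a worker class):

```ruby
# A plain in-process queue standing in for the job backend.
QUEUE = Queue.new
RESULTS = []

STEPS = {
  fetch:   ->(data) { QUEUE << [:process, data + ["fetched"]] },
  process: ->(data) { QUEUE << [:format,  data + ["processed"]] },
  format:  ->(data) { RESULTS.concat(data + ["formatted"]) },
}

def run_pipeline(first_step)
  QUEUE << [first_step, []]
  until QUEUE.empty?
    step, data = QUEUE.pop
    STEPS.fetch(step).call(data)   # each job enqueues its successor
  end
end

run_pipeline(:fetch)     # full run: fetch -> process -> format
run_pipeline(:process)   # partial run, skipping the fetch step
```

Since the work runs outside the request cycle, the hour-plus runtimes in the question never hit a web timeout.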
EDIT
As you clarified, you want regularly scheduled tasks. Clockwork is a great gem for that (and generally a better option than cron):
https://github.com/tomykaira/clockwork
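A clock.rb for the hourly schedule might look roughly like this; the job labels and worker classes are hypothetical, the clockwork gem must be in your Gemfile, and the Clockwork README is the authority on the exact DSL:

```ruby
# clock.rb -- started with `clockwork clock.rb`; Clockwork only
# dispatches, the heavy lifting stays in the background workers.
require 'clockwork'

module Clockwork
  handler do |job|
    case job
    when 'pipeline.full'  then FullPipelineJob.perform_async  # all steps
    when 'pipeline.fetch' then FetchJob.perform_async          # first steps only
    end
  end

  every(1.hour, 'pipeline.full')
  every(10.minutes, 'pipeline.fetch')
end
```

Keeping Clockwork as a thin dispatcher means a slow or crashed job never blocks the schedule itself.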

Letting something happen at a certain time with Rails

Like in browser games: a user constructs a building, and a timer is set for a specific date/time to finish the construction and spawn the building.
I imagined having something like a daemon, but how would that work? To me it seems that spinning + polling is not the way to go. I looked at async_observer, but is that a good fit for something like this?
If you only need the event to be visible to the owning player, then the model can report its updated status on demand and we're done, move along, there's nothing to see here.
If, on the other hand, it needs to be visible to anyone from the time of its scheduled creation, then the problem is a little more interesting.
I'd say you need two things. A queue into which you can put timed events (a database table would do nicely) and a background process, either running continuously or restarted frequently, that pulls events scheduled to occur since the last execution (or those that are imminent, I suppose) and actions them.
Looking at the list of options on the Rails wiki, it appears that there is no One True Solution yet. Let's hope that one of them fits the bill.
I just did exactly this for a PBBG I'm working on (Big Villain; you can see the work in progress at MadGamesLab.com). Anyway, I went with a commands table, where user commands each generated exactly one entry, and an events table with one or more entries per command (linking back to the command). A secondary daemon, started via script/runner, polls the events table periodically and runs events whose time has passed.
So far it seems to work quite well; unless I see some problem when I throw a large number of users at it, I'm not planning to change it.
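The polling daemon described above boils down to something like this stdlib-only sketch (all names hypothetical; in the real app the events live in a database table and the loop runs under script/runner):

```ruby
# One row per scheduled event; the proc stands in for whatever the
# event does (e.g. spawning the finished building).
Event = Struct.new(:run_at, :ran, :action)

EVENTS = []

def schedule(run_at, &action)
  EVENTS << Event.new(run_at, false, action)
end

def poll(now = Time.now)
  EVENTS.each do |e|
    next if e.ran || e.run_at > now   # only events whose time has passed
    e.action.call
    e.ran = true                      # never run the same event twice
  end
end

FIRED = []
schedule(Time.now - 1)  { FIRED << :construction_done }
schedule(Time.now + 60) { FIRED << :too_early }
poll
FIRED   # => [:construction_done]
```

Marking events as ran (rather than deleting them) also leaves an audit trail you can inspect when debugging missed timers.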
To a certain extent it depends on how much logic is on your front end and how much is in your model. If you know how much time will elapse before something happens, you can keep most of the logic on the front end.
I would use your model to determine the state of things, and on a particular request you can check to see if it is built or not. I don't see why you would need a background worker for this.
I would use AJAX to start a timer (see Periodical Executor) for updating your UI. On the model side, just keep track of the created_at column for your building and only allow it to be used if its construction time has elapsed. That way you don't have to take a trip to your db every few seconds to see if your building is done.
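The created_at approach amounts to deriving doneness on demand instead of storing it. A sketch, with a plain Struct standing in for the ActiveRecord model and all names hypothetical:

```ruby
# No background worker: the model computes its state from its
# created_at column whenever it is asked.
Building = Struct.new(:created_at, :build_seconds) do
  def finished?(now = Time.now)
    now >= created_at + build_seconds
  end

  def remaining_seconds(now = Time.now)   # feeds the AJAX countdown timer
    [(created_at + build_seconds) - now, 0].max
  end
end

barracks = Building.new(Time.now - 90, 60)   # started 90s ago, takes 60s
barracks.finished?   # => true

farm = Building.new(Time.now, 300)
farm.finished?       # => false
```

The client-side timer is purely cosmetic; the server re-derives the truth from created_at on every request, so a user tampering with the countdown gains nothing.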
