How do I implement recurring tasks in wolkenkit?

What would be the best way to implement repetitive tasks in wolkenkit?
Let's say I want to import calendar events on a daily basis or fetch data from a server to update some kind of aggregate. What would be the best practice here?
I thought about setting up a timer somewhere that sends commands on a regular basis so that the aggregate's data can be updated, but I am not quite sure where to put that timer. After searching a bit online, I am honestly unsure whether this is something I am even supposed to do at all.

Currently there is no built-in mechanism for time-based scheduling. However, you can create a Node.js script that fetches the data from the server and then uses the client SDK to send commands that update the aggregates. To run it repeatedly, use a scheduling mechanism such as node-cron: https://www.npmjs.com/package/node-cron
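A minimal sketch of that setup, assuming a daily calendar import: node-cron provides the schedule, and everything touching the wolkenkit client (connection options, context, aggregate and command names) is a placeholder you would replace with your own domain definition.

```javascript
// Runs as a plain Node.js process next to (not inside) the wolkenkit application.
// node-cron is the scheduler linked above; the wolkenkit client usage below is
// only illustrative -- check the client SDK docs for the exact API of your version.
const cron = require('node-cron');
const wolkenkit = require('wolkenkit-client');

// Hypothetical helper: replace with a real call to your calendar provider.
const fetchEventsFromCalendarApi = async () => [];

const importCalendarEvents = async () => {
  const events = await fetchEventsFromCalendarApi();

  // Connect to the wolkenkit application (host and port are placeholders).
  const app = await wolkenkit.connect({ host: 'local.wolkenkit.io', port: 3000 });

  for (const event of events) {
    // 'planning', 'calendar' and 'importEvent' are made-up context/aggregate/command
    // names -- use the ones defined in your own domain.
    app.planning.calendar().importEvent({ title: event.title, date: event.date });
  }
};

// '0 3 * * *' = every day at 03:00.
cron.schedule('0 3 * * *', () => {
  importCalendarEvents().catch(err => console.error('Import failed:', err));
});
```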

Related

Async logging of a static queue in Objective-C

I'd like some advice on how to implement the following in Objective-C. The actual application is related to an A/B testing framework I'm working on, but that context shouldn't matter too much.
I have an iOS application, and when a certain event happens I'd like to send a log message to an HTTP service endpoint.
I don't want to send a message every time the event happens. Instead I'd prefer to aggregate them and, once they reach some (configurable) number, send them off asynchronously.
I'm thinking of wrapping a static NSMutableArray in a class with an add method. That method can check whether we have reached the configurable maximum and, if we have, aggregate and send asynchronously.
Does Objective-C offer any better constructs to store this data? Perhaps one that helps handle concurrency issues? Maybe some kind of message queue?
I've also seen some solutions with dispatching that I'm still trying to get my head around (I'm new).
If the log messages are important, keeping them in memory (in an array) might not suffice. If the app quits or crashes, the array's contents will not persist to the next launch.
Instead, you should save them to a database with a 'sync' flag. On every insert you can trigger the sync module to check whether the number of entries with the sync flag set to false has reached the threshold; if so, trigger the upload and either set the sync flag to true for all uploaded records or simply delete the synced records. This also keeps your logging module and your syncing module separate, so both can work independently.
There is plenty of material online about syncing an SQLite database or Core Data; simply search for 'iOS database sync'. If your requirements are not very complex, and you don't mind using third-party or open-source code, it is usually better to go with a readily available solution instead of reinventing the wheel.
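The question is about Objective-C, but the insert-then-flush cycle the answer describes is language-neutral. For illustration only, here is a rough Node.js sketch of the same pattern (better-sqlite3 stands in for SQLite/Core Data, and the table name, threshold and upload endpoint are invented):

```javascript
const Database = require('better-sqlite3');

const db = new Database('logs.db');
db.exec(`CREATE TABLE IF NOT EXISTS log_entries (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  message TEXT NOT NULL,
  synced INTEGER NOT NULL DEFAULT 0
)`);

const THRESHOLD = 20; // configurable batch size

// Hypothetical helper: POST the batch to your logging endpoint.
const uploadBatch = async messages => { /* HTTP request goes here */ };

const addLogEntry = async message => {
  // 1. Persist immediately so a crash cannot lose the entry.
  db.prepare('INSERT INTO log_entries (message) VALUES (?)').run(message);

  // 2. Check whether enough unsynced entries have piled up.
  const { n } = db.prepare('SELECT COUNT(*) AS n FROM log_entries WHERE synced = 0').get();
  if (n < THRESHOLD) return;

  // 3. Upload the batch, then mark the rows as synced (or simply delete them).
  const rows = db.prepare('SELECT id, message FROM log_entries WHERE synced = 0').all();
  await uploadBatch(rows.map(row => row.message));
  const ids = rows.map(row => row.id).join(',');
  db.prepare(`UPDATE log_entries SET synced = 1 WHERE id IN (${ids})`).run();
};
```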

Suggestion for trigger that sends email if threshold is broken

This is quite a broad question, but I'll try to summarise it as best I can.
I have an MVC front end which displays/allows processing of records which are classed as outstanding. I also have a scheduled console app which runs nightly and attempts to resolve each of these records using some logic I wrote.
I have a new requirement, which is to have an email sent whenever the total number of outstanding records exceeds a certain amount, and this amount needs to be configurable.
The table will contain every record with a flag to say whether it has been resolved or not, so I will need to count the outstanding records and then fire off an email if the threshold is broken.
I initially thought about adding a SQL Server trigger on insert; however, I soon realised that if no more records were added for a few days but the total stayed above the threshold because nobody resolved them, then no further email would be sent.
I need the email to send every day on a schedule independently of insert/update.
So now I'm thinking of possibly a SQL Server job, an SSIS package, or even a service that runs on a schedule, but I'm aware this threshold number needs to be configurable.
What would be the quickest, simplest solution to my requirements? I'm open to any suggestion as long as it ticks all the boxes.
Given that the OP already has a console app running on a schedule, the most logical choice would be to simply add this check to the console app along with the email-sending logic. It will be much easier to send emails that way anyway, especially if you employ something like Postal, which lets you use MVC-style views to create your emails.
An SQL Server scheduled job seems to me to be the simplest way to go.
You can add a table to your database that holds the threshold number and read its value from there.
In many cases a GeneralParams table is a good thing to have anyway.
The other option you mentioned (a Windows service) is also configurable in many ways: you can use a GeneralParams table, the App.config file of the service (but you will have to restart it every time you change the App.config), or even a simple text file. Anything goes. The downside is that it's outside of your SQL Server, but the upside is that it is probably easier to send emails from.
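Whichever host you pick (the console app, a SQL Server job, or a service), the check itself stays small. The original stack is C#/SQL Server, so purely to illustrate the flow, here is a Node.js-flavoured sketch; queryScalar, the table and column names, and the mail settings are all placeholders:

```javascript
const nodemailer = require('nodemailer');

// Placeholder: run a query against your database and return a single value.
const queryScalar = async sql => { /* wire this up to SQL Server */ };

const checkOutstandingRecords = async () => {
  // Read the configurable threshold from a GeneralParams-style table.
  const threshold = await queryScalar(
    "SELECT Value FROM GeneralParams WHERE Name = 'OutstandingThreshold'");

  // Count the records that are still unresolved.
  const outstanding = await queryScalar(
    'SELECT COUNT(*) FROM Records WHERE Resolved = 0');

  if (outstanding <= threshold) return;

  // Send the notification mail (SMTP settings are placeholders).
  const transport = nodemailer.createTransport({ host: 'smtp.example.com', port: 587 });
  await transport.sendMail({
    from: 'noreply@example.com',
    to: 'ops@example.com',
    subject: `Outstanding records: ${outstanding} (threshold ${threshold})`,
    text: `There are ${outstanding} unresolved records; the configured threshold is ${threshold}.`
  });
};

// Call checkOutstandingRecords() from your nightly job or scheduler.
```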

Opening and closing streaming clients for specific durations

I'd like to infrequently open a Twitter streaming connection with TweetStream and listen for new statuses for about an hour.
How should I go about opening the connection, keeping it open for an hour, and then closing it gracefully?
Normally for background processes I would use Resque or Sidekiq, but from my understanding those are for completing tasks as quickly as possible, not chilling and keeping a connection open.
I thought about using a global variable like $twitter_client but that wouldn't horizontally scale.
I also thought about building a second application that runs on one box to handle this functionality, but that seems excessive if it can be integrated into the main app somehow.
To clarify, I have no trouble starting a process, capturing tweets, and using them appropriately. I'm just not sure what I should be starting. A new app? A daemon of some sort?
I've never encountered a problem like this, and am completely lost. Any direction would be much appreciated!
Although not a direct fix, this is what I would look at:
Time
You're working with time, so I'd look at what time-centric processes could be used to open the connection and keep it alive for an hour.
Specifically, I'd look at running some sort of job on the server, which you could fire at specific times (programmatically if required), to open & close the connection. I only have experience with Resque, but as you say, it's probably not up to the job. If I find any better solutions, I'll certainly update the answer.
Storage
Once you've connected to TweetStream, you'll want to look at how you can capture the tweets for that time period. It seems a waste to create a database table just for this job, so I'd be inclined to use something like Redis to store the tweets that you need.
This can then be used to output the tweets you need, allowing you to store / capture them and then delete them after the hour-long window has passed.
Delivery
I don't know what context you're using this feature in, so I'll give you as generic a process idea as possible.
To display the tweets, I'd personally create some sort of record in the DB to track the time you're pinging TweetStream that day (if it changes; if it's constant, just set a constant in an initializer), and then include some logic to try to get the tweets from Redis. If you're able to collect them, show them as you wish; otherwise don't print anything.
Hope that gives you a broader spectrum of ideas?
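The question is about Ruby/TweetStream, but to make the Time and Storage parts concrete, here is a rough Node.js-style sketch; the streaming client is a pure placeholder, while the Redis calls use the node-redis v4 API:

```javascript
const { createClient } = require('redis');

// Placeholder stand-in for TweetStream or whatever streaming client you use.
const openTweetStream = options => ({ on: () => {}, close: () => {} });

const ONE_HOUR_MS = 60 * 60 * 1000;

const captureForOneHour = async () => {
  const redis = createClient();
  await redis.connect();

  // Open the streaming connection and buffer incoming tweets in Redis
  // instead of creating a dedicated database table.
  const stream = openTweetStream({ track: ['#example'] });
  stream.on('status', async tweet => {
    await redis.rPush('tweets:current-window', JSON.stringify(tweet));
  });

  // Close everything gracefully once the hour is up, and let the buffer age out.
  setTimeout(async () => {
    stream.close();
    await redis.expire('tweets:current-window', 60 * 60);
    await redis.quit();
  }, ONE_HOUR_MS);
};

// Kick this off from a scheduled job at the times you want to listen.
captureForOneHour().catch(console.error);
```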

Is it possible to string / queue Ruby actions?

I've written a number of actions in a RoR app that perform different steps within a process.
E.g.
- One action communicates with a third-party service using its API and collects data.
- Another processes this data and places it into the relevant database.
- Another takes this new data and formats it in a specific way.
etc.
I would like to fire off the process at timed intervals, e.g. each hour. But I don't want to do the whole thing each time.
Sometimes I may just want to do the first two actions; at other times I might want to do every part of the process.
So have one action run, and then when it's finished call another action, etc.
The actions could take up to an hour to complete, if not longer, so I need a solution that won't timeout.
What would be the best way to achieve this?
You have quite a few options for processing jobs in the background:
Sidekiq: http://mperham.github.io/sidekiq/
Queue Classic: https://github.com/ryandotsmith/queue_classic
Delayed Job: https://github.com/collectiveidea/delayed_job
Resque: https://github.com/resque/resque
Just read through and pick the one that seems to fit your criteria the best.
EDIT
As you clarified, you want regularly scheduled tasks. Clockwork is a great gem for that (and generally a better option than cron):
https://github.com/tomykaira/clockwork

Letting something happen at a certain time with Rails

Like with browser games: a user constructs a building, and a timer is set for a specific date/time to finish the construction and spawn the building.
I imagined having something like a daemon, but how would that work? To me it seems that spinning + polling is not the way to go. I looked at async_observer, but is that a good fit for something like this?
If you only need the event to be visible to the owning player, then the model can report its updated status on demand and we're done, move along, there's nothing to see here.
If, on the other hand, it needs to be visible to anyone from the time of its scheduled creation, then the problem is a little more interesting.
I'd say you need two things. A queue into which you can put timed events (a database table would do nicely) and a background process, either running continuously or restarted frequently, that pulls events scheduled to occur since the last execution (or those that are imminent, I suppose) and actions them.
Looking at the list of options on the Rails wiki, it appears that there is no One True Solution yet. Let's hope that one of them fits the bill.
I just did exactly this for a PBBG I'm working on (Big Villain; you can see the work in progress at MadGamesLab.com). Anyway, I went with a commands table, where each user command generates exactly one entry, and an events table with one or more entries per command (linking back to the command). A secondary daemon, started via script/runner, polls the events table periodically and runs any events whose time has passed.
So far it seems to work quite well; unless I see some problem when I throw a large number of users at it, I'm not planning to change it.
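The polling loop both answers describe is small; as a sketch of what the daemon does on every tick (in Node.js here, with the table, column names, and database helpers invented), it is essentially:

```javascript
// Placeholder database helpers: wire these up to your actual schema and client.
const queryRows = async (sql, params) => [];
const execute = async (sql, params) => {};

// Domain logic for a due event, e.g. marking the building as constructed.
const handleEvent = async event => { console.log('Completing event', event.id); };

const POLL_INTERVAL_MS = 5000; // arbitrary; pick whatever granularity you need

const runDueEvents = async () => {
  // Fetch events whose scheduled time has passed and that have not run yet.
  const dueEvents = await queryRows(
    'SELECT id, type, payload FROM events WHERE run_at <= NOW() AND completed = 0');

  for (const event of dueEvents) {
    await handleEvent(event);
    await execute('UPDATE events SET completed = 1 WHERE id = ?', [event.id]);
  }
};

setInterval(() => runDueEvents().catch(console.error), POLL_INTERVAL_MS);
```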
To a certain extent it depends on how much logic is on your front end and how much is in your model. If you know how much time will elapse before something happens, you can keep most of the logic on the front end.
I would use your model to determine the state of things, and on a particular request you can check to see whether it has been built or not. I don't see why you would need a background worker for this.
I would use AJAX to start a timer (see Periodical Executor) for updating your UI. On the model side, just keep track of the created_at column for your building and only allow it to be used once its construction time has elapsed. That way you don't have to take a trip to your DB every few seconds to see if your building is done.
