I have multiple time-based triggers in a Google Sheet (most of which are supposed to run every 15 minutes and one of which is supposed to run once a day). The scripts those triggers run simply generate random numbers in specific cells (one per script/trigger). Those cells changing then triggers a script that pulls from an API. The issue is that I can only pull so much from the site's API before it locks me out (the limit is 60 pulls per minute). The scripts all seem to run at the same time on occasion (including the one that's supposed to run only once a day), which results in me being locked out of the API and receiving no data. Does anyone know why everything would be running at the same time? This happens even when the once-a-day trigger shouldn't be active.
For this use case you might be better off chaining your functions.
So you would only schedule functionA(), and at the end of it you call functionB(), and so on.
That also allows you to put a Utilities.sleep() call between them if your API requires it.
If all the functions are in the same project that's straightforward, but if they are in different projects you will need to publish your projects as libraries.
It's not pretty but it will do the trick.
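A minimal sketch of that chaining pattern, assuming everything lives in one container-bound Apps Script project; functionA/functionB/functionC and the writeRandom helper are illustrative names, not code from the question:

```javascript
// Only functionA() needs a time-driven trigger; the others are chained from it.
function functionA() {
  writeRandom('Sheet1', 'A1');
  Utilities.sleep(2000);   // pause between writes to stay under the API's 60 calls/minute limit
  functionB();
}

function functionB() {
  writeRandom('Sheet1', 'B1');
  Utilities.sleep(2000);
  functionC();
}

function functionC() {
  writeRandom('Sheet1', 'C1');
}

// Illustrative helper: writes a random number into the given cell.
function writeRandom(sheetName, cell) {
  SpreadsheetApp.getActiveSpreadsheet()
      .getSheetByName(sheetName)
      .getRange(cell)
      .setValue(Math.random());
}
```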
I need to put (among other things) the time of the last update of a Google Document (Doc, Sheet, etc.) in a footer, so that if the document is printed it is possible to distinguish which printed copy is the latest. This needs to be a script so it is done automatically for users without requiring them to do anything special. I have a script that does this (at least for Docs), but there are some issues:
In a Doc there is no onEdit trigger, so I can't determine automatically when the document is updated in order to update the footer. (I am aware that I would need to prevent my own update from triggering this by ignoring the change I make)
onOpen can only update the document if the user has edit access to the document. So if somebody opens the document, edits it, and closes the window, and then another user opens it without edit access, they would see the next to last update time instead of the last update time.
The current version of my script must be manually bound to each document when it is created. Is there some way to have it automatically get bound when new documents are created? Would an add-on work?
Is it possible to use the "Detect Changes" API somehow? I'm not sure this is even going to do what I want, and it seems like it might be complicated to do it this way.
Would a time-driven trigger make sense? The only problem is that time-driven triggers don't seem to be able to run more than once per minute (unless in an add-on, where they can only run once per hour), so the last update time could be off by up to that much. This probably wouldn't be a big problem, but could occasionally cause some issues. Also, would running code every minute cause any kind of quota issues?
I tried using ClockTriggerBuilder with a delay of 1000ms, but it wasn't updating. Then I noticed that the after function says it will run the trigger after the specified number of milliseconds plus or minus 15 minutes!
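For the time-driven option, a rough sketch in a container-bound Doc script (installTrigger/updateFooterTimestamp are made-up names; as noted above, the script's own footer write is itself an edit you would have to account for):

```javascript
// Run once to install a clock trigger; every-minute is the shortest interval
// clock triggers support, and firing times are only approximate.
function installTrigger() {
  ScriptApp.newTrigger('updateFooterTimestamp')
      .timeBased()
      .everyMinutes(1)
      .create();
}

// Stamp the file's last-updated time (from Drive) into the footer.
function updateFooterTimestamp() {
  var doc = DocumentApp.getActiveDocument();
  var lastUpdated = DriveApp.getFileById(doc.getId()).getLastUpdated();
  var text = 'Last updated: ' + lastUpdated;

  var footer = doc.getFooter() || doc.addFooter();
  // Skip the write when nothing changed, so the footer edit itself
  // doesn't keep bumping the document's modification time.
  if (footer.getText() !== text) {
    footer.editAsText().setText(text);
  }
}
```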
I have a project which has a set end (datetime). After this time is reached I want to disable some buttons on the project page and show that it is finished.
My idea is to add a boolean flag finished and change it when the end time reaches time.now.
The question is: is there any better way than running a cron job every minute that watches each project's end datetime, compares it with the current time, and changes the flag once it has passed?
I am not sure that running a cron job every minute is the most efficient way to do that.
Thanks
If it is just a matter of showing or hiding some buttons, this could be done at the UI level, and you could update your records once a day with a cron job instead of running it every minute.
Another approach could be to update the records only when a user asks for them. Like:
The user requests a specific project
You fetch the project
Before serving the project information you check the end time and update the record
Your program then serves the freshly updated record
But, then again, this depends on the requirements.
And finally, you may combine these two ideas: a once- or twice-a-day cron job to update records, plus checking and updating things 'on the fly' as suggested above.
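Since the question doesn't name a stack, here is a minimal plain-JavaScript sketch of the 'check on read' idea; the in-memory db and the endsAt/finished field names are placeholders for whatever the real persistence layer looks like:

```javascript
// Stand-in persistence layer for the sketch; replace with real database calls.
const db = new Map();

async function loadProject(id) { return db.get(id); }
async function saveProject(p)  { db.set(p.id, p); }

// Return the project, lazily flipping the flag the first time it is read
// after its end datetime has passed.
async function getProject(id) {
  const project = await loadProject(id);
  if (project && !project.finished && Date.now() >= new Date(project.endsAt).getTime()) {
    project.finished = true;
    await saveProject(project);
  }
  return project;
}
```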
Maybe you could post some more info to receive a more specific answer.
I'm trying to improve app launch performance for subsequent logins (every login after the first) in my mobile app, and after adding some stopwatch diagnostics I can see that defining my 8 tables with MobileServiceSQLiteStore.DefineTable<T> takes 2.5 seconds on average. Every time.
On an iPhone 4 running iOS 7 the loading time would be less than a second if it weren't for having to define these tables every time. I would expect them to only need to be defined on the first run of the app, when the SQLite database is set up. I've tried removing the definitions on subsequent logins and just getting the sync tables, but it fails with "Table is not defined".
So, it seems this is the intended behavior. Can you explain why they need to be defined each time, and/or whether there is any workaround? It may be negligible considering my phone is pretty old now, but it is still something I would like to remove if possible.
Yes, it is required to be called every time, because the SDK uses it to know how to deserialize data if you read it via the untyped interface, i.e. IMobileServiceSyncTable instead of IMobileServiceSyncTable<T>.
As of now there is no workaround to avoid calling it each time. I'm surprised, however, that it is taking 2.5 seconds for you, because DefineTable does not do any database operations. It merely inspects the members on your type/JObject and maintains an in-memory dictionary for later reuse.
I would recommend downloading and compiling the SDK and debugging your way through it to figure out where the time is actually being spent.
I have a Parse Cloud Code function that has to make an HTTPS request to another service, and that service may take too long to finish for my function to stay within the 15-second timeout. Is there any way to increase the timeout limit above 15 seconds?
The only cloud code that can exceed 15 seconds is a Job.
One option is to have a Cloud Function that saves information about what you want done as a row in a class, e.g. PendingRequest. You can then have a job that runs every 5 minutes, checking for any records in the PendingRequest class, running them, and saving the results, e.g. in another class called CompletedRequest.
If your UI needs to show completion it'll need to poll the CompletedRequest class to see if its request has been completed.
The main issue is that it could be up to 5 minutes before you get any results.
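A rough sketch of that pattern in classic Parse Cloud Code, assuming each PendingRequest row stores the target url and the response text is all you need to keep (the field names are illustrative):

```javascript
// Scheduled from the Parse dashboard to run every 5 minutes.
Parse.Cloud.job("processPendingRequests", function(request, status) {
  var query = new Parse.Query("PendingRequest");
  query.find().then(function(pendingRows) {
    var work = pendingRows.map(function(row) {
      // httpRequest returns a promise when no callbacks are supplied.
      return Parse.Cloud.httpRequest({ url: row.get("url") }).then(function(response) {
        var completed = new Parse.Object("CompletedRequest");
        completed.set("requestId", row.id);   // lets the UI poll for its result
        completed.set("result", response.text);
        return completed.save().then(function() {
          return row.destroy();               // done, remove it from the queue
        });
      });
    });
    return Parse.Promise.when(work);
  }).then(function() {
    status.success("Pending requests processed.");
  }, function(error) {
    status.error("Failed to process pending requests: " + error.message);
  });
});
```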
I figured out a way to do that and would love to share it. Grab the open source Parse Mobile SDK, navigate to the ParsePlugins.java file, search for socketOperationTimeout, and change the two assignments to that variable to whatever timeout value you like.
Compile the modified SDK and import it into your mobile project.
Like with browser games: a user constructs a building, and a timer is set for a specific date/time to finish the construction and spawn the building.
I imagined having something like a daemon, but how would that work? To me it seems that spinning + polling is not the way to go. I looked at async_observer, but is that a good fit for something like this?
If you only need the event to be visible to the owning player, then the model can report its updated status on demand and we're done, move along, there's nothing to see here.
If, on the other hand, it needs to be visible to anyone from the time of its scheduled creation, then the problem is a little more interesting.
I'd say you need two things. A queue into which you can put timed events (a database table would do nicely) and a background process, either running continuously or restarted frequently, that pulls events scheduled to occur since the last execution (or those that are imminent, I suppose) and actions them.
Looking at the list of options on the Rails wiki, it appears that there is no One True Solution yet. Let's hope that one of them fits the bill.
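The shape of that queue-plus-poller idea, sketched in plain JavaScript for illustration, with an in-memory array standing in for the events table and a setInterval standing in for the daemon (the next answer uses a real table and script/runner):

```javascript
// In-memory stand-in for an events table.
const events = [];

function scheduleEvent(runAt, action) {
  events.push({ runAt: runAt, action: action, done: false });
}

// The "background process": run this from cron or a daemon every N seconds.
function runDueEvents(now) {
  events
    .filter(function (e) { return !e.done && e.runAt <= now; })
    .forEach(function (e) {
      e.action();
      e.done = true;   // in a real table you'd mark or delete the row
    });
}

// Example: finish a building five seconds from now.
scheduleEvent(new Date(Date.now() + 5000), function () {
  console.log('Building complete!');
});
setInterval(function () { runDueEvents(new Date()); }, 1000);
```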
I just did exactly this for a PBBG I'm working on (Big Villain, you can see the work in progress at MadGamesLab.com). Anyway, I went with a commands table, where each user command generates exactly one entry, and an events table with one or more entries per command (linking back to the command). A secondary daemon, started with script/runner, polls the events table periodically and runs events whose time has passed.
So far it seems to work quite well; unless I see some problem when I throw a large number of users at it, I'm not planning to change it.
To a certain extent it depends on how much logic is on your front end and how much is in your model. If you know how much time will elapse before something happens, you can keep most of the logic on the front end.
I would use your model to determine the state of things, and on a particular request you can check to see whether it is built or not. I don't see why you would need a background worker for this.
I would use AJAX to start a timer (see Prototype's PeriodicalExecuter) for updating your UI. On the model side, just keep track of the created_at column for your building and only allow it to be used once its construction time has elapsed. That way you don't have to take a trip to your db every few seconds to see if your building is done.
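A plain-JavaScript version of that client-side countdown, using setInterval in place of Prototype's PeriodicalExecuter; the element IDs, ISO timestamp, and build duration are made up for illustration:

```javascript
// The server sends the building's created_at and build duration once,
// and the browser just counts down locally instead of polling the database.
function startBuildCountdown(createdAtIso, buildSeconds, onDone) {
  var finishAt = new Date(createdAtIso).getTime() + buildSeconds * 1000;

  var timer = setInterval(function () {
    var remaining = Math.ceil((finishAt - Date.now()) / 1000);
    if (remaining <= 0) {
      clearInterval(timer);
      onDone();   // e.g. swap the "under construction" sprite
    } else {
      document.getElementById('countdown').textContent = remaining + 's';
    }
  }, 1000);
}

// Usage (values would come from the server-rendered page or an AJAX call):
// startBuildCountdown('2023-01-01T00:00:00Z', 300, function () {
//   document.getElementById('building').classList.add('complete');
// });
```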