Increase Parse Cloud Code function timeout

I have a Parse Cloud Code function that has to make an HTTPS request to another service, and that service may take too long to finish for my function to stay within the 15-second timeout. Is there any way to increase the timeout limit above 15 seconds?

The only cloud code that can exceed 15 seconds is a Job.
One option is to have a Cloud Function that saves the details of what you want done to a row, e.g. in a class called PendingRequest. You can then have a job that runs every 5 minutes, checks for any records in the PendingRequest class, runs them, and saves the results, e.g. in another class called CompletedRequest.
If your UI needs to show completion, it will need to poll the CompletedRequest class to see whether its request has been completed.
The main issue is that it could be up to 5 minutes before you get any results.
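
A minimal sketch of that pattern, assuming the classic Parse Cloud Code JavaScript API (Parse.Cloud.define, Parse.Cloud.job, Parse.Cloud.httpRequest); the class names and the targetUrl parameter are only illustrative:

    // Cloud Function: record the work instead of doing the slow HTTPS call inline.
    Parse.Cloud.define("queueRequest", function(request, response) {
      var PendingRequest = Parse.Object.extend("PendingRequest");
      var pending = new PendingRequest();
      pending.set("targetUrl", request.params.targetUrl); // illustrative field
      pending.save().then(function(saved) {
        response.success(saved.id);   // the client gets a handle back immediately
      }, function(error) {
        response.error(error);
      });
    });

    // Background job, scheduled every 5 minutes from the dashboard: do the slow calls.
    Parse.Cloud.job("processPendingRequests", function(request, status) {
      var query = new Parse.Query("PendingRequest");
      query.find().then(function(rows) {
        var chain = Parse.Promise.as();
        rows.forEach(function(row) {
          chain = chain.then(function() {
            return Parse.Cloud.httpRequest({ url: row.get("targetUrl") })
              .then(function(httpResponse) {
                var CompletedRequest = Parse.Object.extend("CompletedRequest");
                var done = new CompletedRequest();
                done.set("pendingId", row.id);
                done.set("result", httpResponse.text);
                return done.save().then(function() { return row.destroy(); });
              });
          });
        });
        return chain;
      }).then(function() {
        status.success("All pending requests processed.");
      }, function(error) {
        status.error(String(error));
      });
    });

The UI then queries CompletedRequest (matching on pendingId, or whatever key you choose) to see whether its work has finished.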

I figured out a way to do this and would love to share it. Grab the open source Parse Mobile SDK and navigate to the ParsePlugins.java file. Search for socketOperationTimeout and change the two places where this variable is assigned to whatever timeout value you like.
Compile the modified SDK and import it into your mobile project.

Related

All time-based triggers seem to run at the same time

I have multiple time-based triggers in a Google Sheet (most of which are supposed to run every 15 minutes, and one of which is supposed to run once a day). The scripts these triggers run simply generate random numbers in specific cells (one per script/trigger). Those cells changing then triggers a script that pulls from an API. The issue is that I can only pull so much from the site's API before it locks me out (60 pulls per minute is the limit). The scripts seem to all run at the same time on occasion (including the one that's supposed to run only once a day), which results in me being locked out of the API and receiving no data. Does anyone know why everything would be running at the same time? This happens even at times when the once-a-day trigger shouldn't be active.
For this use case you might be better off chaining your functions.
You would only schedule functionA(), and at the end of it you call functionB(), and so on.
That also allows you to put a sleep between them if your API requires it.
If every function is in the same project this is straightforward, but if they are in different projects you will need to publish your projects as libraries.
It's not pretty, but it will do the trick.
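
A minimal Google Apps Script sketch of that chaining, where pullFromApi() is a placeholder for whatever your existing trigger functions do and Utilities.sleep() is Apps Script's built-in pause:

    // Only functionA() keeps a time-driven trigger; the others are chained from it.
    function functionA() {
      pullFromApi("Sheet A");   // placeholder for your existing per-sheet logic
      Utilities.sleep(2000);    // pause to stay under the 60-calls-per-minute limit
      functionB();
    }

    function functionB() {
      pullFromApi("Sheet B");
      Utilities.sleep(2000);
      functionC();
    }

    function functionC() {
      pullFromApi("Sheet C");
    }

Keep in mind that the chained functions now share one execution, so the whole chain has to fit within Apps Script's overall execution time limit.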

Delay app start until RemoteConfig returns values

I have successfully implemented RemoteConfig in my application, using fetchWithCompletionHandler: to retrieve the values from the Firebase server.
However, some of the RemoteConfig parameters are required for the app to start up, and I cannot give them meaningful default values (using setDefaults).
So my idea is to block the app startup until fetchWithCompletionHandler: has returned the values (I can do that asynchronously while presenting a nice spinner to the user).
However, I am wondering, will fetchWithCompletion return values immediately?
Or could it be that the user will have to wait a long time for the values to be loaded?
It will not always return values immediately -- remember, you're making a network call to fetch those values from the server. Most of the time, this call is pretty fast, but depending on your user's network at the time, it could take a long time, and I believe the default timeout for this call is pretty long -- something like 30 seconds.
If you do want to block your app from running until this call is complete, I would recommend adding a loading screen (so your users know it's not frozen in case the call does take a few seconds), and implementing your own time-out that's a little shorter than the default.
Another option you might want to consider is the "Load up values for next time" approach, where you call activateFetched immediately (which will activate any values you might have downloaded in a previous session), then start a new fetch for values you can load the next time around. There's more info about it here if you're interested. This will mean your users' first session will have to be with default values, however, and it sounds like that might not be an option with your app.
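
The "implement your own, shorter timeout" idea is platform-independent. Here is a small JavaScript sketch of racing the fetch against a timer; fetchRemoteConfigValues() and startApp() are hypothetical stand-ins for your wrapper around fetchWithCompletionHandler: and your normal startup path:

    // Hypothetical stand-in for the real SDK fetch (fetchWithCompletionHandler: on iOS);
    // here it just resolves after a simulated network delay.
    function fetchRemoteConfigValues() {
      return new Promise(function(resolve) {
        setTimeout(function() { resolve({ welcomeMessage: "hello" }); }, 1500);
      });
    }

    // Hypothetical: continue the normal launch, with or without remote values.
    function startApp(values) {
      console.log("starting with", values || "default values");
    }

    // Race the fetch against our own, shorter timeout.
    function fetchWithTimeout(ms) {
      var timer = new Promise(function(_, reject) {
        setTimeout(function() { reject(new Error("Remote Config fetch timed out")); }, ms);
      });
      return Promise.race([fetchRemoteConfigValues(), timer]);
    }

    // Show the spinner, then start the app with fetched values or fall back to defaults.
    fetchWithTimeout(8000)
      .then(function(values) { startApp(values); })
      .catch(function() { startApp(null); });   // null -> rely on setDefaults values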

Parse open source server with mLab slow

I switched over to the open source Parse Server with mLab on AWS, and my Objective-C project runs slowly. When I try to update, delete, add, or query, it takes about five seconds to process. I was not having this problem with parse.com. If anyone could help me, that would be great!
You are likely on a trial version of the server, which means that after 30 minutes of inactivity your server goes to "sleep". When it is called by a request, it must "wake up", which usually takes about 5 seconds. However, only the first query or login after sleeping should actually be slow; the ones following should be very responsive.

Is MobileServiceSQLiteStore.DefineTable<T> necessary on every run and if so why?

I'm trying to improve app launch performance for subsequent logins (every login after the first) in my mobile app, and after adding some stopwatch diagnostics I can see that defining my 8 tables with MobileServiceSQLiteStore.DefineTable<T> takes 2.5 seconds on average. Every time.
On an iPhone 4 running iOS 7, the loading time would be less than a second if it weren't for having to define these tables every time. I would expect them to only need to be defined on the first run of the app, when the SQLite database is set up. I've tried removing the definitions on subsequent logins and just getting the sync tables, but it fails with "Table is not defined".
So it seems this is the intended behavior. Can you explain why they need to be defined each time, and/or whether there is any workaround? It may be negligible considering my phone is pretty old now, but it is still something I would like to remove if possible.
Yes, it is required to be called every time, because the SDK uses it to know how to deserialize data if you read it via the untyped interface, i.e. IMobileServiceSyncTable instead of IMobileServiceSyncTable<T>.
As of now there is no workaround to avoid calling it each time. I'm surprised, however, that it is taking 2.5 seconds for you, because DefineTable does not do any database operations. It merely inspects the members on your type/JObject and maintains an in-memory dictionary for later reuse.
I would recommend downloading and compiling the SDK and debugging your way through it to figure out where the time is actually spent.

Should I convert my action method to an async action method?

I have a web site where users can upload a PDF and convert it to a Word doc.
It works nicely, but sometimes (5-6 times per hour) users have to wait longer than usual for the conversion to take place.
I use ASP.NET MVC and the flow is:
- User uploads file -> get the stream and convert it to Word -> save the Word file as a temp file -> return the URL to the user
I am not sure whether I have to make this flow asynchronous. My flow is sequential now, but I have about 3-5 requests per second, and the server has a dual-core CPU and 4 GB of RAM.
As far as I know, maxConcurrentRequestsPerCPU is 5000, and the default value of Threads Per Processor Limit is 25, so these default settings should be more than fine, right?
Then why does my web app still have these waits sometimes? Are there any IIS settings I need to change from the defaults, or should I just make my synchronous conversion method async?
PS: The conversion itself takes between 1 second and 40-50 seconds, depending on the PDF file size.
UPDATE: What is not very clear to me is this: if a user uploads a file and the conversion is long, shouldn't only the current request "suffer" because of it? The next request is independent and is served on a different thread, so there should be no wait there, right?
There are a couple of things that must be defined clearly here. An async(hronous) method and an asynchronous flow are not the same thing, at least as far as I understand.
An asynchronous method (using Task, usually also leveraging the async/await keywords) works in the following way:
- Execution starts on thread t1 until it reaches an await.
- The (potentially) long operation does not take place on thread t1 - sometimes not even on an app thread at all, leveraging IOCP (I/O completion ports).
- Thread t1 is freed and released back to the thread pool, ready to service other requests if needed.
- When the (potentially) long operation returns, a thread is taken from the thread pool (it could even be the same t1 or, more likely, another one) and execution resumes from the last await encountered.
- The rest of the code executes.
There's a couple of things to note here:
a. The client is blocked during the whole process. The eventual switching of threads and so on happens only on the server.
b. This approach is mainly designed to alleviate an unwanted condition called "thread starvation". It is not meant to speed up the total client waiting time, and it usually doesn't speed up the process.
As far as I understand, an asynchronous flow would mean, at least in this case, that after the user requests the document conversion, the client (i.e. the client's browser) quickly receives a response informing them that this potentially long process has started on the server, that they should be patient, and that the current response page might provide progress feedback.
In your case I recommend the second approach because the first approach would not help at all.
Of course this will not be easy. You need to emulate a queue, and you need a processing agent and an eviction policy (most probably enforced by the same agent, if you don't want a second agent).
This would work along the following lines (see the code sketch after the list of refinements below):
a. The end user submits a file, the web server receives it
b. The web server places it in the queue and receives a job number
c. The web server returns a response to the user with the job number (say, an HTML page with a polling mechanism that periodically asks the server for progress)
d. The agent starts processing the document when it gets the chance (i.e. when it finishes other work) and updates its status in a common place for the web server to pick up
e. The web server receives calls from that HTML page asking for the status of the job, finds out that the job is complete, and offers a download link or starts the download directly.
This can be refined in some ways:
- Instead of the client polling the server, WebSockets or long polling could be used (SignalR, for example, covers both)
- Many processing agents could be used instead of one, if the hardware configuration makes sense
The queue can be implemented with a simple RDBMS; Remus Rușanu has a nice article about this.
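
A rough, framework-agnostic JavaScript sketch of that queue / agent / polling split (in a real ASP.NET deployment the queue and the status store would be persistent, e.g. the RDBMS approach above, and convertPdfToWord() stands in for your actual conversion):

    // In-memory stand-ins for the queue and the job status store.
    var queue = [];
    var jobs = {};           // jobNumber -> { status, resultUrl }
    var nextJobNumber = 1;

    // (a)+(b)+(c) Web server side: enqueue the upload, hand back a job number.
    function submitConversion(uploadedFileName) {
      var jobNumber = nextJobNumber++;
      jobs[jobNumber] = { status: "queued", resultUrl: null };
      queue.push({ jobNumber: jobNumber, file: uploadedFileName });
      return jobNumber;      // the browser polls with this number
    }

    // (d) Processing agent: picks up queued work whenever it gets the chance.
    function processNextJob() {
      var job = queue.shift();
      if (!job) return;
      jobs[job.jobNumber].status = "processing";
      convertPdfToWord(job.file)                    // the slow part (1-50 seconds)
        .then(function(url) { jobs[job.jobNumber] = { status: "done", resultUrl: url }; })
        .catch(function() { jobs[job.jobNumber].status = "failed"; });
    }
    setInterval(processNextJob, 1000);              // a single "agent"

    // (e) Web server side: what the polling page (or SignalR hub) asks for.
    function getJobStatus(jobNumber) {
      return jobs[jobNumber] || { status: "unknown" };
    }

    // Placeholder for the real PDF-to-Word conversion.
    function convertPdfToWord(fileName) {
      return new Promise(function(resolve) {
        setTimeout(function() { resolve("/converted/" + fileName + ".docx"); }, 3000);
      });
    }

The important point is the split: the request that receives the upload returns almost immediately, and the long-running conversion happens outside the request/response cycle.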
