My Azure Durable Function (Runtime V3) processes an average of 3M events per day. After it runs for two or three weeks, it gets slower and slower. When I delete the two table storages (History & Instances) used by the Durable Functions framework, it recovers and works as expected. I host my function app on the Consumption plan. I also use Durable Entities inside my function app, and my code uses sub-orchestrators for the fan-out mechanism.
Is this problem expected under a heavy workload? Do I need to clear those table storages from time to time, or do I need to delete the state of completed entities inside my Durable Entity function?
Someone please help me.
Yes, you should perform periodic clean-ups yourself by calling the PurgeInstanceHistoryAsync method. See a similar post on how to do this: https://stackoverflow.com/a/60894392
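For reference, here is a minimal sketch of such a periodic clean-up, written against the JavaScript/TypeScript durable-functions SDK, where the equivalent call is purgeInstanceHistoryBy (the timer trigger and the 30-day retention window are assumptions to adapt to your workload):

```typescript
import * as df from "durable-functions";
import { AzureFunction, Context } from "@azure/functions";

// Timer-triggered clean-up: purge history for completed instances older than
// 30 days (the retention window is an assumption; tune it to your workload).
const purgeHistory: AzureFunction = async (context: Context): Promise<void> => {
  const client = df.getClient(context);
  const createdTimeFrom = new Date(0); // beginning of time
  const createdTimeTo = new Date(Date.now() - 30 * 24 * 60 * 60 * 1000); // 30 days ago
  const statuses = [df.OrchestrationRuntimeStatus.Completed];

  const result = await client.purgeInstanceHistoryBy(createdTimeFrom, createdTimeTo, statuses);
  context.log(`Purged history for ${result.instancesDeleted} instances`);
};

export default purgeHistory;
```

This assumes the function is wired up with a timer trigger plus a durableClient input binding in its function.json.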
Also review any loops or Monitor patterns that you may have in your code.
Any looping logic (like foreach, for, or while loops) will replay from the initial startup state. While the Durable Functions replay architecture is very efficient at this, the code we write may not be optimised for repetitive iterations.
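Since the question mentions fan-out, one mitigation is to schedule all the work up front and await it as a batch, rather than awaiting inside the loop body. A minimal sketch using the JavaScript/TypeScript durable-functions SDK (the ProcessItem activity name is illustrative):

```typescript
import * as df from "durable-functions";

// Fan-out/fan-in: schedule every activity up front, then await the whole batch.
// On replay the framework resolves the batch from history in one pass instead
// of your code serially re-awaiting each iteration of a loop.
const fanOutOrchestrator = df.orchestrator(function* (context) {
  const items: string[] = context.df.getInput();
  const tasks = items.map((item) => context.df.callActivity("ProcessItem", item));
  const results = yield context.df.Task.all(tasks);
  return results;
});

export default fanOutOrchestrator;
```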
The Durable Monitor pattern is almost an anti-pattern. The concept is fine, but it is easily misinterpreted and open to abuse. It is designed for a low-frequency loop that polls an endpoint for a set number of iterations, until a finite time, or until the state of the endpoint being monitored changes. That state change is the trigger to perform the rest of the operation.
It is NOT an example of how to use general or high-frequency looping structures in Durable Functions.
It is NOT an example of how to implement a traditional HTTP endpoint-reporting monitor in an infinite-loop (while(true)) style, perhaps to record changes into a data store over time.
If your durable function logic has an iterator that may involve many iterations, consider migrating the iteration step to a sub-orchestration that uses the eternal orchestration pattern, as sketched below.
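A minimal sketch of that shape with the JavaScript/TypeScript SDK: each generation processes one batch, optionally waits, and then restarts itself via continueAsNew, which resets the ever-growing history instead of replaying it (the ProcessBatch activity and the one-minute delay are illustrative):

```typescript
import * as df from "durable-functions";

const eternalBatchProcessor = df.orchestrator(function* (context) {
  const state = context.df.getInput();

  // Do one iteration's worth of work.
  const nextState = yield context.df.callActivity("ProcessBatch", state);

  // Optional pause between iterations.
  const nextRun = new Date(context.df.currentUtcDateTime.getTime() + 60 * 1000);
  yield context.df.createTimer(nextRun);

  // Restart with a fresh history instead of looping in place.
  context.df.continueAsNew(nextState);
});

export default eternalBatchProcessor;
```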
I am currently building an app that will run on Parse Server on Back4App. I wanted to know if there are any tips for lowering requests. I feel like my current app setup is not taking advantage of any methods to lower them.
For example: when I call a Cloud Code function, is that one request even if the function runs multiple queries inside it? Can I use Cloud Code to lower requests somehow?
Another example: if I use the Parse Local Datastore rather than constantly getting data from the server, can that lower requests? Or does it not really help, because you would still need to sync changes later on? Or do all the changes get sent at once and count as one request?
Sorry, I am very new to how requests and back-end pricing are measured in general. I want to be as efficient as possible so I can ship my app without going over budget.
Take a look at this link:
http://docs.parseplatform.org/ios/guide/#performance
Most of the tips there are useful both for performance and for reducing the number of requests.
About your questions:
1) Cloud Code: each call to a Cloud Code function counts as a single request, no matter how many queries it runs.
2) Client-side cache: it will certainly reduce the total number of requests you make to the server.
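To make point 1 concrete, here is a minimal Cloud Code sketch (the dashboard function and the Post/Comment classes are made-up names): two queries run server-side, yet the client pays for exactly one request when it calls Parse.Cloud.run("dashboard").

```typescript
// cloud/main.ts; Parse is the global provided by the Cloud Code runtime.
Parse.Cloud.define("dashboard", async () => {
  const postQuery = new Parse.Query("Post");
  const commentQuery = new Parse.Query("Comment");

  // Two queries run on the server, but the client is billed for one request.
  const [postCount, latestComments] = await Promise.all([
    postQuery.count({ useMasterKey: true }),
    commentQuery.descending("createdAt").limit(20).find({ useMasterKey: true }),
  ]);

  return { postCount, latestComments };
});
```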
I have an application which does a lot of computation on a few pages (requests). The web interface sends an AJAX request, and the computation sometimes takes about 2-5 minutes. The problem is that by this time the AJAX request has timed out.
We can certainly increase the timeout on the web portal, but that doesn't sound like the right solution. To improve performance, I have already:
Removed N+1/Duplicate queries
Implemented Caching
What else could be done here to reduce the calculation time?
Also, if it still takes longer, I was thinking of the following solutions:
Do the computation beforehand and store it in the DB, so when the actual request comes, no calculation is needed. (I'm apprehensive about this approach, since we would have to modify or erase-and-recalculate this data whenever the application logic changes.)
Load all the data into the cache when the application starts or when data gets modified. But the computation still has to happen the first time, and we can't keep the whole dataset in the cache from startup, so it would have to be cached on demand.
Maybe do something like an Angular promise, where the promise is fulfilled when the response comes back from the server.
Is there any alternative to do this efficiently?
UPDATE:
Depending on user input, the calculation might take a few seconds, or it might take 2-5 minutes. The scenario: the user imports an Excel file, which is parsed and saved to the DB with a background job. On another page, the user then wants to see a report/analytics graph derived from a few calculations on the imported data. The calculation depends on many factors, so we don't want to store the results in the DB (as pointed out above). Also, when the user requests the report/analytics graph, it would be a bad experience to tell them the graph will be shown after some time and that they'll get an email/notification.
The extremely typical solution is to enqueue a job for background processing, and return a job ID to the front-end. Your front-end can then poll for completion using that job ID, or you can trigger a notification such as an email to be sent to the user when the job completes.
There are a multitude of gems for this, and it is such a popular and accepted solution that Rails introduced its own ActiveJob for this exact purpose.
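The answer above refers to Rails and ActiveJob; purely to illustrate the enqueue-and-poll shape in a self-contained way, here is a sketch in TypeScript with Express (all names are hypothetical, and the in-process Map plus setTimeout stand in for a real worker queue and the actual 2-5 minute calculation):

```typescript
import express from "express";
import { randomUUID } from "node:crypto";

type Job = { status: "pending" | "done"; result?: unknown };
const jobs = new Map<string, Job>();

// Stand-in for the long-running report calculation.
async function runExpensiveCalculation(): Promise<unknown> {
  return new Promise((resolve) => setTimeout(() => resolve({ rows: [] }), 5000));
}

const app = express();

// Enqueue the work and return a job ID immediately, so the AJAX call never times out.
app.post("/reports", (_req, res) => {
  const id = randomUUID();
  jobs.set(id, { status: "pending" });
  runExpensiveCalculation().then((result) => jobs.set(id, { status: "done", result }));
  res.status(202).json({ jobId: id });
});

// The front-end polls this endpoint until status === "done".
app.get("/reports/:id", (req, res) => {
  const job = jobs.get(req.params.id);
  if (!job) {
    res.status(404).end();
    return;
  }
  res.json(job);
});

app.listen(3000);
```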
Here are a few possible solutions:
1. Optimize your tables with indexes to reduce data-fetching time.
2. Preload all rows you'll be dealing with at the beginning, so you won't run a query each time you calculate something... it's faster/easier to @things.select { |r| r.blah } than to Thing.where(conditions).
3. Instead of all that, just do the computing in PL/SQL on the database side. Sure, it's not the same as writing Ruby code, but it could be faster.
4. And yes, cache the whole result set in Memcached or Redis or something (and expire it when something changes).
5. Run the calculation in the background (crontab?) and store the results in a JSON file somewhere, or cache the entire HTML file (if you're not localizing or anything).
PS: I'm doing 1, 2, and 3 combined with 5 (caching JSON results in Memcached and then pulling the array and formatting/localizing) for a few million records from about 12 tables... sports data, mainly.
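As a concrete illustration of point 4, here is a minimal cache-aside helper in TypeScript; the in-process Map stands in for Memcached/Redis, and the names are illustrative:

```typescript
type CacheEntry<T> = { value: T; expiresAt: number };
const cache = new Map<string, CacheEntry<unknown>>();

// Return the cached value if still fresh; otherwise recompute and store it.
async function cachedResult<T>(key: string, ttlMs: number, compute: () => Promise<T>): Promise<T> {
  const hit = cache.get(key) as CacheEntry<T> | undefined;
  if (hit && hit.expiresAt > Date.now()) return hit.value;

  const value = await compute(); // the expensive query/calculation
  cache.set(key, { value, expiresAt: Date.now() + ttlMs });
  return value;
}

// Call this whenever the underlying data changes.
function invalidate(key: string): void {
  cache.delete(key);
}
```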
In my app I want to retrieve a large amount of data from Parse to build a view of statistics. However, in the future, as data builds up, this may become a huge amount.
For example, 10,000 results. Even if I fetched in batches of 1,000 at a time, this would result in 10 fetches. That could rapidly send me over Parse's 30-requests-per-second limit, especially when several other chunks of data may need to be collected at the same time for other stats.
Any recommendations/tips/advice for this scenario?
You will also run into limits with the skip and limit query parameters. And heavy lifting on a mobile device could also present issues for you.
If you can, you should pre-aggregate these statistics, perhaps once per day, so that you can simply request the summarized results directly.
Alternatively, create a Cloud Code function to do some processing for you and return the results. Again, you may well run into limits here, so a cloud job may meet your needs better. You may then need to effectively create a request object that is processed by the job, and either poll for completion or send out push notifications when it finishes.
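Here is a sketch of that pre-aggregation idea as a Parse Cloud Code job (the Event and DailyStats classes and their fields are made up): it pages through the rows once on the server, so the client later needs only one small query for the rolled-up numbers.

```typescript
// Parse is the global provided by the Cloud Code runtime.
Parse.Cloud.job("aggregateDailyStats", async () => {
  const query = new Parse.Query("Event");

  let total = 0;
  // each() pages through every matching row on the server.
  await query.each(
    (event: Parse.Object) => {
      total += (event.get("value") as number) || 0;
    },
    { useMasterKey: true }
  );

  const stats = new Parse.Object("DailyStats");
  stats.set("date", new Date());
  stats.set("total", total);
  await stats.save(null, { useMasterKey: true });
});
```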
I have written many iOS apps that communicated with a backend. Almost every time, I used HTTP caching to cache queries and parsed the response data (JSON) into Objective-C objects. For this new project, I'm wondering if a Core Data approach would make sense.
Here's what I thought:
The iOS client makes a request to the server and parses the objects from JSON into Core Data models.
Every time I need an object, instead of hitting the server directly, I query Core Data to see if I already made that request. If the object exists and hasn't expired, I use the locally stored object.
However, if the object doesn't exist or has expired (some caching logic would be applied here), I fetch the object from the server and update Core Data accordingly.
I think having such an architecture could help with the following:
1. Avoid unnecessary queries to the backend
2. Allow full support for offline browsing (you can still make relational queries against Core Data's SQLite store)
Now here's my question to SO Gods:
I know this kind of requires coding the backend logic a second time (server + Core Data), but is this overkill?
Are there any limitations I have underestimated?
Any other ideas?
First of all, if you're a registered iOS dev, you should have access to the WWDC 2010 sessions. One of those sessions covered a bit of what you're talking about: "Session 117, Building a Server-driven User Experience". You should be able to find it on iTunes.
A smart combination of REST/JSON/Core Data works like a charm and is a huge time-saver if you plan to reuse your code, but it requires knowledge of HTTP (and of Core Data, if you want your apps to perform well and safely).
So the key is to understand REST and Core Data.
Understanding REST means understanding HTTP methods (GET, POST, PUT, DELETE, ...HEAD?) and response codes (2xx, 3xx, 4xx, 5xx) and headers (Last-Modified, If-Modified-Since, ETag, ...).
Understanding Core Data means knowing how to design your model, set up relations, handle time-consuming operations (deletes, inserts, updates), and make things happen in the background so your UI stays responsive. And of course how to query the local SQLite store (e.g. prefetching IDs so you can update objects instead of creating new ones once you get their server-side equivalents).
If you plan to implement a reusable API for the tasks you mentioned, you should make sure you understand REST and Core Data, because that's where you will probably do the most coding. Existing libraries will do the rest of the job: ASIHTTPRequest (or any other) for the network layer, and any good JSON lib (e.g. SBJSON) for parsing.
The key to making such an API simple is to have your server provide a RESTful service and your entities hold the required attributes (dateCreated, dateLastModified, etc.), so you can create requests (easily done with ASIHTTPRequest, whether GET, PUT, POST, or DELETE) and add the appropriate HTTP headers, e.g. If-Modified-Since for a conditional GET.
If you already feel comfortable with Core Data, can handle JSON, and can easily make HTTP requests and handle responses (again, ASIHTTPRequest helps a lot here, but there are others, or you can stick to the lower-level Apple NS classes and do it yourself), then all you need is to set the correct HTTP headers on your requests and handle the HTTP response codes appropriately (assuming your server is RESTful).
If your primary goal is to avoid needlessly re-updating a Core Data entity from its server-side equivalent, just make sure your entity has a "last-modified" attribute and do a conditional GET, setting the If-Modified-Since HTTP header to the entity's "last-modified" date. The server will respond with status code 304 (Not Modified) if the resource didn't change (assuming the server is RESTful). If it did change, the server will set the Last-Modified header to the date of the last change, respond with status code 200, and deliver the resource in the body (e.g. in JSON format).
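A minimal sketch of that conditional GET, in TypeScript with fetch for brevity (in the Core Data setup described above, you would persist the lastModified value on the entity and send it back on the next request):

```typescript
// Fetch a resource only if it changed since `lastModified` (an HTTP date string).
async function fetchIfModified(url: string, lastModified?: string) {
  const headers: Record<string, string> = {};
  if (lastModified) headers["If-Modified-Since"] = lastModified;

  const response = await fetch(url, { headers });

  if (response.status === 304) {
    return null; // local copy is current: skip re-parsing and re-saving
  }

  const body = await response.json();
  // Persist this next to the entity and send it back on the next request.
  const newLastModified = response.headers.get("Last-Modified") ?? undefined;
  return { body, lastModified: newLastModified };
}
```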
So the answer to your question is, as always, probably 'it depends'.
It mostly depends on what you'd like to put in your reusable do-it-all Core Data/REST layer.
To give you numbers: it took me 6 months (in my spare time, at a pace of 3-10 hours per week) to get mine where I wanted it to be, and honestly I'm still refactoring and renaming it to handle special use cases (cancellation of requests, rollbacks, etc.) and to provide fine-grained callbacks (reachability, network layer, serialization, Core Data saving, ...). But it's pretty clean, elaborate, and optimized, and hopefully fits my employer's general needs (an online marketplace for classifieds, with multiple iOS apps). That time included learning, testing, optimizing, debugging, and constantly changing my API (first adding functionality, then improving it, then radically simplifying it, then debugging it again).
If time-to-market is your priority, you're better off with a simple and pragmatic approach: never mind reusability, just keep the lessons in mind, and refactor in the next project, reusing and fixing code here and there. In the end, the sum of all those experiences might materialize into a clear vision of HOW your API works and WHAT it provides. If you're not there yet, keep it out of the project budget, and just try to reuse as much of the stable third-party APIs out there as you can.
Sorry for the lengthy response; I felt you were stepping into something like building a generic API or even a framework. Those things take time, knowledge, housekeeping, and long-term commitment, and most of the time they are a waste of time, because you never finish them.
If you just want to handle specific caching scenarios to allow offline usage of your app and minimize network traffic, then you can of course just implement those features. Set If-Modified-Since headers on your requests, inspect Last-Modified headers or ETags, and keep that info in your persisted entities so you can resubmit it in later requests. I'd also recommend caching resources such as images locally (persistently), using the same HTTP headers.
If you have the luxury of modifying the server-side service (in a RESTful manner), then you're fine, provided you implement it well. From experience, you can save as much as 3/4 of the network/parsing code on the iOS side if the service behaves well (returns appropriate HTTP status codes; avoids nil checks and number/date transformations from strings; provides lookup IDs instead of implicit strings; etc.).
If you don't have that luxury, then either the service is at least RESTful (which helps a lot), or you'll have to fix things client-side (which is often a pain).
There is a solution out there that I couldn't try, because I'm too far into my project to refactor the server-caching aspect of my app, but it should be useful for people who are still looking for an answer:
http://restkit.org/
It does exactly what I did, but it's much more abstracted than my implementation. Very insightful stuff there. I hope it helps somebody!
I think it's a valid approach. I've done this a number of times. The tricky part is when you need to deal with synchronization: if the client and the server can both change things at the same time, you almost always need app-specific merging logic.