MVC4-WCF - async loading of data from 3rd party - asp.net-mvc

I'm setting up a server-client solution with ASP.NET MVC4 and a WCF-service, and was thinking you might have some input to a couple of questions.
The WCF service gets its data from a 3rd-party service which is quite slow. So my plan is the following scenario:
User logs in, setting off a jQuery ajax request to the MVC controller
The controller requests the data from the WCF service
The service retrieves a small amount of data from the 3rd party and, before it returns it...
HERE IT COMES: the service spawns a background thread to download a large amount of data from the 3rd party
The service returns the small amount of data
The client gets the small amount of data and displays it, but also starts polling the service for the large amount of data
The large amount of data is downloaded to the WCF service and put into a cache database
The service returns the large amount of data to the client on the next polling request.
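To make the client side of this concrete, here is a rough TypeScript sketch of the polling loop I have in mind; the endpoint names, response shapes, and the 202-until-ready convention are all made up for illustration:

```typescript
// Hypothetical endpoints: /api/data/quick returns the small payload,
// /api/data/full returns 202 (Accepted) until the background download finishes.
interface QuickData { items: string[] }
interface FullData { items: string[] }

async function loadData(render: (items: string[]) => void): Promise<void> {
  // 1. Fetch and display the small amount of data immediately.
  const quick: QuickData = await fetch("/api/data/quick").then(r => r.json());
  render(quick.items);

  // 2. Poll the service until the large data set is ready.
  const timer = setInterval(async () => {
    const res = await fetch("/api/data/full");
    if (res.status === 200) {
      clearInterval(timer);  // stop polling once the data has arrived
      const full: FullData = await res.json();
      render(full.items);
    }
    // Anything else (e.g. 202) means the background download is still running.
  }, 3000);
}
```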
My questions:
Am I not thinking straight about this?
What kind of background-threading mechanism should I use? The WCF-service is hosted in IIS.
Is polling from the client the right way to retrieve the next chunk of data?
Thanks for your time!

Related

Lowering total requests per month on parse server Swift

I am currently building an app that will run on parse server on back4app. I wanted to know if there are any tips for lowering requests. I feel like my current app setup is not taking advantage of any methods to lower requests.
For example: when I call a Cloud Code function, is that one request even if the function runs multiple queries? Can I use Cloud Code to lower requests somehow?
Another example: if I use the Parse local datastore rather than constantly getting data from the server, can that lower requests? Or does it not really help, because you would still need to push the changes later on? Or do all the changes get sent at once and count as one request?
Sorry, I am very new to how requests and back-end pricing are measured in general. I want to make sure I am as efficient as possible so I can get my app out without going over budget.
Take a look at this link:
http://docs.parseplatform.org/ios/guide/#performance
Most of the tips there are useful both for performance and for the number of requests.
About your questions:
1) Cloud Code - each call to a Cloud Code function counts as a single request, no matter how many queries you run inside it
2) Client-side cache - it will definitely reduce the total number of requests you make against the server
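To illustrate the first point, a minimal Cloud Code sketch (Cloud Code is JavaScript on the server, shown here in the same TypeScript style as the other examples; the "Post" and "Comment" class names are invented):

```typescript
// Two server-side queries, but the client pays for only one request.
Parse.Cloud.define("getDashboard", async () => {
  const posts = await new Parse.Query("Post").limit(20).find();
  const comments = await new Parse.Query("Comment").limit(20).find();
  return { posts, comments };
});
```

On the client, a single Parse.Cloud.run("getDashboard") call then replaces two separate queries.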

How to sync images from local to web server, so I can see them across devices?

I have successfully implemented sync logic to sync my local data (text-related data) with the web server (I was referring to this link) and that is working.
But I have to sync images too. Here is the way I want to do it:
A user will create a product (title, 5 images (1 compulsory, 4 optional), description).
Now, I could follow the same scenario as the one above, but I am confused about the image uploading: it may not be feasible to upload everything at the same time, and what if the data is saved on the server but the internet connection drops before the images are uploaded? Then there is a chance other users won't see the images.
Does anyone have a good approach? Please guide me.
Thanks,
The challenge here is that you are dealing with a long transfer time, due to a large chunk of data, am I right? If that's the case, then I would recommend the protocol below. I would also use the same protocol for the small data syncs as well, since there is also a risk that the sync is interrupted.
Set a flag on the local device that a sync is in progress.
Upload to a temporary location on the server.
Once the upload is complete, call a web service on the server to notify it that the transfer has completed. A web service is preferable, since it allows you to receive a response once it completes. The web service moves the files to their final destination.
When you receive the response, you can update the flag set in step 1, and you will know that the files are where they should be on the server.
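A rough TypeScript sketch of that sequence, written as browser code for brevity (the endpoint paths are placeholders, and on a mobile device the flag would live in local storage or a local database):

```typescript
// Placeholder endpoints: POST /upload/tmp stores a file in a temporary area,
// POST /api/commitUpload tells the server to move the files into place.
async function syncImages(productId: string, images: Blob[]): Promise<void> {
  localStorage.setItem(`sync:${productId}`, "in-progress");  // step 1: set the flag

  for (const [i, image] of images.entries()) {               // step 2: upload to temp
    const body = new FormData();
    body.append("file", image, `${productId}-${i}.jpg`);
    await fetch("/upload/tmp", { method: "POST", body });
  }

  // Step 3: notify the server that the transfer is complete.
  const res = await fetch("/api/commitUpload", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ productId, count: images.length }),
  });

  // Step 4: on success, update the flag; on any failure the flag stays
  // "in-progress" and the whole sync can be retried from the top.
  if (res.ok) localStorage.setItem(`sync:${productId}`, "done");
}
```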

Can I download and parse a large file with RestKit?

I have to download thousands or millions of hot posts from a web service and store them locally in Core Data. The JSON response or file is about 20 or 30 MB, so the download will take time. I guess mapping it and storing it in Core Data will also take time.
Can I do it in RestKit? Or has it been designed only for reasonably sized responses?
I see I can track progress when downloading a large file, and I can even know when mapping starts or finishes: http://restkit.org/api/latest/Protocols/RKMapperOperationDelegate.html
Probably I can also encapsulate the Core Data operation to avoid blocking the UI.
What do you think? Is this feasible, or should I choose a more manual approach? I would like to know your opinion. Thanks in advance.
Your problem is not encapsulation or threading, it's memory usage.
For a start, thousands or millions of 'hot posts' are likely to cause you issues on a mobile device. You should usually be using a web service that allows you to obtain a filtered set of content. If you don't have that already, consider creating it (possibly by uploading the data to a service like Parse.com).
RestKit isn't designed to use a streaming parser, so the full JSON will need to be deserialised into memory before it can be processed. You can try it, but I suspect the mobile device will be unhappy if the JSON is 20 / 30 MB.
So, create a nicer web service, or use a streaming parser and process the results yourself (which could technically be done using RestKit mapping operations).
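For illustration, here is what the streaming approach looks like in TypeScript with the stream-json npm package (on iOS you would reach for a streaming parser such as YAJL instead); the file name and item handling are assumptions, and the point is that only one item is in memory at a time:

```typescript
import { createReadStream } from "fs";
import StreamArray from "stream-json/streamers/StreamArray";

// Walk a huge JSON array without deserialising the whole document at once.
const stream = createReadStream("hotposts.json").pipe(StreamArray.withParser());

let count = 0;
stream.on("data", ({ value }: { value: unknown }) => {
  count += 1;  // here you would map `value` and persist it in batches
});
stream.on("end", () => console.log(`processed ${count} items`));
```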

How can I retrieve data onscreen from an external API without tying up a request?

I have a Rails 3 application that lets a user perform a search against a 3rd party database via an API. It could potentially bring back quite a bit of XML data. Also, the API could be busy serving requests for other users and have a nontrivial delay in its response time.
I only run 2 web servers, so obviously I can't afford a delayed request. I use Sidekiq to process long-running jobs, but in all the cases where I've needed it, I haven't had to return a value to the screen.
I also use Pusher to communicate back to the user when a background job is finished. I am checking it out, but I don't know if it can be used for the kind of data I want to push to the screen. Right now it just pops up dialog boxes with messages that I send it.
I have thought of some pretty kooky stuff, like running the request via Sidekiq, sending the results to a session object or file, then using Pusher to kick off some kind of event to grab the data and populate the screen with it. Seems kind of Rube Goldberg-ish.
I appreciate any help or insight anyone can offer into the problem!
I had a similar situation not long ago, and the way I fixed it was by using memcache and threads.
I also thought about using Sidekiq, but Sidekiq is ideal when you don't expect to use the data right away, so memcache and threads worked pretty well and gave us a good amount of control.
Instead of calling the API directly, I would assign the API request to a thread, and this thread, once done, would write to memcache. In my case this can happen incrementally, with the same API able to return more data from the same endpoint until it is complete.
From the UI I would have a basic ajax polling mechanism that hits a controller and checks memcache for the data and for the status, to see whether it is complete or not; this signals the UI that it needs to keep polling for more data.
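The same pattern, sketched in TypeScript rather than Ruby (a plain Map stands in for memcache, and the third-party client is a placeholder):

```typescript
// In-memory stand-in for memcache: the background job appends results here.
const cache = new Map<string, { rows: string[]; complete: boolean }>();

// Kick off the slow third-party call without blocking the web request.
function startSearch(jobId: string, query: string): void {
  cache.set(jobId, { rows: [], complete: false });
  void (async () => {
    for await (const batch of fetchFromThirdParty(query)) {
      cache.get(jobId)!.rows.push(...batch);  // write incrementally as data arrives
    }
    cache.get(jobId)!.complete = true;
  })();
}

// Hit by the UI's polling requests: returns whatever has accumulated so far.
function pollSearch(jobId: string): { rows: string[]; complete: boolean } {
  return cache.get(jobId) ?? { rows: [], complete: false };
}

// Placeholder for the real third-party client, yielding results page by page.
async function* fetchFromThirdParty(query: string): AsyncGenerator<string[]> {
  yield [`first page of results for ${query}`];
  yield [`second page of results for ${query}`];
}
```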

Is using a Web API as a data provider for a website efficient?

I was thinking about setting up a project with Web API. Basically build the API first and program the web site using this API.
Although it sounds promising, I was wondering:
If I separate the logic in a nice way, I might end up retrieving data for a web page through multiple API calls, which in turn means multiple connections to the server, with all the overhead etc.
For example, if I use, let's say, 8 different API calls on one page, I can't imagine it won't have an impact on the page's performance.
So, have I misunderstood something? Is this kind of overhead negligible, or does the need for multiple calls indicate that the design is wrong?
Thanks in advance.
Well, we did it: a Web API server providing REST access to all the data, and independent UI clients consuming it as the only access point to the underlying persistence.
The first request takes some time; it is significantly longer. It must initialize all the UI client machinery and fetch the minimum data needed from the server (menu, user, access rights, metadata, list-view data).
The point, the real advantage, is hidden in the second, the third... request. A lot of stuff is already there on the UI client. And even if something is requested again, caching (server, client, or both) can be introduced.
So, this does mean more requests (at least during UI client start-up)... but it does not imply a slower application.
The maintenance benefit is hidden (maybe it is not hidden; it should be obvious) in the separation of concerns. On the server, we are no longer solving issues like where to place the user-data handling, in the base controller or a child controller... or whether there should be a master page, a layout controller...
Solved. We take care of single, specific pieces of functionality, published via REST. One method, one business operation. And that is the dream if we want to keep the application alive and be the ones who repair and extend it.
One aspect is that you can display the page to the end user very fast. Once the page is loaded, use jQuery async calls and any JavaScript template tool (like AngularJS or Mustache.js) to call the Web API simultaneously and build the client page views.
I have used this approach in multiple projects and the user experience is tremendous.
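For instance, the page shell can render immediately while the data loads in parallel; a minimal sketch in TypeScript with made-up endpoints and stub renderers in place of a real template tool:

```typescript
// Stub renderers standing in for a client-side template tool (Mustache.js etc.).
const renderMenu = (data: unknown): void => { /* fill the menu view */ };
const renderUser = (data: unknown): void => { /* fill the user view */ };
const renderOrders = (data: unknown): void => { /* fill the orders view */ };

// Fire all the API calls at once; the page shell is already visible.
async function buildPage(): Promise<void> {
  const [menu, user, orders] = await Promise.all([
    fetch("/api/menu").then(r => r.json()),
    fetch("/api/user").then(r => r.json()),
    fetch("/api/orders").then(r => r.json()),
  ]);
  renderMenu(menu);
  renderUser(user);
  renderOrders(orders);
}
```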
Most modern browsers support 6-8 parallel connections to the same site, so you do have to be careful about that. Unless you are connecting to that many separate systems, I would try to reduce the number of connections, or ensure the calls are triggered asynchronously by different events to reduce the chance of parallel connections.
Making a series of HTTP calls to obtain data for your page will have an overhead. Only testing will tell you how that might impact in your scenario.
There is little point using Web API just because you can. You should have a legitimate reason for building a RESTful API. Even then, if it is primarily for your own consumption, design it to deliver a ViewModel for each page in one call.
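In other words, one aggregated call per page instead of eight; the endpoint and shape below are purely illustrative:

```typescript
// A single endpoint returns everything one page needs in one round trip.
interface OrdersPageViewModel {
  menu: string[];
  user: { name: string };
  orders: { id: number; total: number }[];
}

async function loadOrdersPage(): Promise<OrdersPageViewModel> {
  const res = await fetch("/api/pages/orders");  // hypothetical aggregate endpoint
  return res.json();
}
```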
