I am using Amazon S3 in conjunction with Amazon CloudFront. In my app I have a method to update an S3 object: I fetch the object through CloudFront, make a change to the data, and re-upload it under the same key, replacing/updating the file/object.
However, CloudFront doesn't seem to update along with S3 (well, it does eventually, but my users don't have all day). Is there a way to force a CloudFront content update? Apparently you can invalidate it; is there a way to do that from the iOS SDK?
I don't know that there is a way to make a CloudFront invalidation request via the iOS SDK. You would likely need to build your own method to formulate the request against the AWS API.
I would, however, suggest that you take another approach. Invalidation requests are expensive operations (relative to other CloudFront costs). You probably do not want to let your users initiate an unlimited number of invalidation requests against CloudFront from the application. You will also run up against limits on the number of concurrent invalidation requests you can have. Your best bet is to implement a file-name versioning scheme so that you can change the file name programmatically for each revision. You would then reference the new URL in CloudFront with each revision, eliminating the need to wait for a cache refresh or perform an invalidation. This also makes the new content available sooner, as invalidation requests may take a while to process.
Please note the following from the CloudFront FAQ:
Q. Is there a limit to the number of invalidation requests I can make?
There are no limits on the total number of files you can invalidate; however, each invalidation request you make can have a maximum of 1,000 files. In addition, you can only have 3 invalidation requests in progress at any given time. If you exceed this limit, further invalidation requests will receive an error response until one of the earlier requests completes. You should use invalidation only in unexpected circumstances; if you know beforehand that your files will need to be removed from cache frequently, it is recommended that you either implement a versioning system for your files and/or set a short expiration period.
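To illustrate the versioning idea above, here is a minimal Objective-C sketch that is not tied to any particular SDK call; uploadData:toKey: and the CloudFront domain below are placeholders for your existing upload code and distribution.

// Build a key that changes with every revision; any per-revision value
// works (a counter, a content hash, or a timestamp as used here).
- (NSString *)versionedKeyForBaseName:(NSString *)baseName
                            extension:(NSString *)extension
{
    long long revision = (long long)([[NSDate date] timeIntervalSince1970] * 1000);
    return [NSString stringWithFormat:@"%@.%lld.%@", baseName, revision, extension];
}

// Usage sketch:
// NSString *key = [self versionedKeyForBaseName:@"profile" extension:@"jpg"];
// [self uploadData:newData toKey:key];  // placeholder for your existing S3 upload
// NSString *url = [@"https://dxxxxxxxx.cloudfront.net/" stringByAppendingString:key];
// Save `url` wherever clients look up the latest revision; no invalidation needed.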
We set the timeout interval for a request via NSMutableURLRequest's timeoutInterval. As Apple's documentation describes, it limits the idle time between packets, not the duration of the whole request. When we analyse our request logs, some timed-out requests exceeded the number of seconds we set as timeoutInterval. We need to time out requests accurately.
From reading the documentation and blog posts, the timeoutIntervalForRequest property in NSURLSessionConfiguration behaves the same as timeoutInterval, but the timeoutIntervalForResource property seems to fit our requirement.
However, Mattt says on objc.io that timeoutIntervalForResource "should only really be used for background transfers". Can it be used for normal requests, such as querying user info? Is it appropriate in this situation?
Thanks very much.
It can be used, but it rarely makes sense to do so.
The expected user experience from an iOS app is that when the user asks to download or view some web-based resource, the fetch should continue, retrying/resuming as needed, until the user explicitly cancels it.
That said, if you're talking about fetching something that isn't requested by the user, or if you are fetching additional optional data that you can live without, adding a resource timeout is probably fine. I'm not sure why you would bother to cancel it, though. After all, if you've already spent the network bandwidth to download half of the data, it probably makes sense to let it finish, or else that time is wasted.
Instead, it is usually better to time out any UI that is blocked by the fetch, if applicable, but continue fetching the request and cache it. That way, the next time the user does something that would cause a similar fetch to occur, you already have the data.
The only exception I can think of would be fetching video fragments or something similar, where if it takes too long, you need to abort the transfer and switch to a different, lower-quality stream. But in most apps, that should be handled by the HLS support in iOS itself, so you don't have to manage that.
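For reference, a minimal sketch of setting both timeout properties on a session configuration (the specific values are arbitrary):

// timeoutIntervalForRequest is the idle timeout that resets whenever data
// arrives; timeoutIntervalForResource caps the total time for the whole
// transfer, including any retries.
NSURLSessionConfiguration *config = [NSURLSessionConfiguration defaultSessionConfiguration];
config.timeoutIntervalForRequest  = 30.0;   // seconds of silence between packets
config.timeoutIntervalForResource = 120.0;  // hard cap on the entire transfer
NSURLSession *session = [NSURLSession sessionWithConfiguration:config];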
Reading the Workbox strategies documentation, I can't imagine a situation where I would use the "cache-first" strategy in Workbox.
There is the "stale-while-revalidate" strategy, which also serves from the cache first and, in the background, updates the cached file over the network. If the target file changes, that is useful because the next time it is accessed, the app uses the latest file that was cached last time. If nothing has changed, there is no disadvantage, I think.
What is the main purpose of using the "cache-first" strategy in Workbox?
Thanks in advance.
(This answer isn't specific to Workbox, though Workbox makes it easier to use these strategies, instead of "rolling them by hand.")
I'd recommend using a cache-first strategy when you're sure that the content at a given URL won't change. If you're confident of that, then the extra network request made in the stale-while-revalidate strategy is just a waste—why bother with that overhead?
The scenario in which you should have the highest confidence that the content at a URL won't change is when the URL contains some explicit versioning information (e.g. https://example.com/libraries/v1.0.0/index.js) or a hash of the underlying contents (e.g. https://example.com/libraries/index.abcd1234.js).
It can sometimes make sense to use a cache-first strategy when a resource might update, but your users are not likely to "care" about the update, and the cost of retrieving the update is high. For example, you could argue that using a cache-first strategy for the images used in the body of an article is a reasonable tradeoff, even if the image URLs aren't explicitly versioned. Seeing an out-of-date image might not be the worst thing, and you would not force your users to download a potentially large image during the revalidation step.
I'd like my iOS application (at least certain endpoints) to have the following network behavior:
Always use the cache, whenever it's available, no matter the age (draw the UI right away)
If the data is stale, also make a network request (the UI has stale data during this period, but it's still probably pretty close)
If network data returns, update the cache and make any UI updates that are required.
I prefer a behavior like this because I can then set my caching policy very aggressively (long cache times). For data that updates infrequently, this results in rapid UI returns in the common case and a model layer that is kept up to date essentially in the background (from the user's perspective).
I'm reading about NSURLCache, but I don't see a cache policy, or even a combination of two policies that I'm confident in.
Options:
Use ReturnCacheDataDontLoad to always get the cache. If that fails or the cache is old, use ReloadIgnoringLocalCacheData for the HTTP fetch. (Do I have to check the age myself? Is the age inspectable?) A rough sketch of this appears after the list.
Use ReturnCacheDataDontLoad to always get the cache. Then use UseProtocolCachePolicy with the cache time set very low and ignore the response if it comes from the cache (can I tell if it comes from the cache? this says not reliably).
Separate the two concerns. Use ReturnCacheDataDontLoad for all user-initiated requests, only firing a network request right away if there is no cache at all. Separately, have a worker that keeps an eye on stored models, updating them in the background whenever they appear old.
Extend NSURLCache
Use something off-the-shelf that already does this? (AFNetworking just uses NSURLSession caching; EVURLCache forces disk caching but expects the data to be seeded on app install.)
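A rough sketch of what the first option might look like with a plain NSURLSession; updateUIWithData: is a placeholder for whatever redraws the UI:

NSURL *url = [NSURL URLWithString:@"https://api.example.com/profile"];

// 1. Serve whatever is in the cache right away, without touching the network.
NSMutableURLRequest *cachedRequest = [NSMutableURLRequest requestWithURL:url];
cachedRequest.cachePolicy = NSURLRequestReturnCacheDataDontLoad;
[[[NSURLSession sharedSession] dataTaskWithRequest:cachedRequest
    completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        if (data) {
            [self updateUIWithData:data];   // placeholder: draw the (possibly stale) UI now
        }
    }] resume];

// 2. Independently hit the network, which refreshes the shared cache, then redraw.
NSMutableURLRequest *freshRequest = [NSMutableURLRequest requestWithURL:url];
freshRequest.cachePolicy = NSURLRequestReloadIgnoringLocalCacheData;
[[[NSURLSession sharedSession] dataTaskWithRequest:freshRequest
    completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        if (data) {
            [self updateUIWithData:data];   // placeholder: redraw with fresh data
        }
    }] resume];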
I mentioned Amazon CDN and iOS devices because I am not sure which part is the culprit.
I host jpg and PDF files in Amazon CDN.
I have an iOS application that downloads a large number of jpg and PDF files in a queue. I have tried using dataWithContentsOfURL: and ASIHTTPRequest, but I get the same result. ASIHTTPRequest at least gives a callback to indicate that there is a problem with the download, so I can force it to retry.
But this happens very often. Out of 100 files, usually 1-5 files have to be redownloaded. If I check the file size, it is smaller than the original file size and can't be opened.
The corrupted files are usually different every time.
I've tried this on different ISPs and networks; the result is the same.
Is there a configuration I missed in Amazon CDN, or is there something else I missed in the iOS download code? Is it not recommended to queue a large number of files for download?
I wouldn't download more than 3 or 4 items at once on an iPhone. Regardless of implementation limitations (ASIHTTPRequest is decent anyway) or the potential for disk thrashing, you have to code for the 3G use case unless your app is explicitly marked (via an Info.plist setting) as requiring Wi-Fi.
A solution exists within ASIHTTPRequest itself to make these operations sequential rather than concurrent. Add your ASIHTTPRequest objects to an ASINetworkQueue (the one returned by [ASINetworkQueue queue] will do fine). They will be executed one after the other.
Note that if you encounter any errors, all the requests on the queue will by default be cancelled unless you set the queue's shouldCancelAllRequestsOnFailure to NO.
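For illustration, a minimal sketch of that setup, assuming `urls` is an NSArray of NSURL objects for the files you want to fetch:

ASINetworkQueue *queue = [ASINetworkQueue queue];
queue.maxConcurrentOperationCount = 1;         // strictly one download at a time
queue.shouldCancelAllRequestsOnFailure = NO;   // one failure doesn't cancel the rest

for (NSURL *url in urls) {
    ASIHTTPRequest *request = [ASIHTTPRequest requestWithURL:url];
    [request setCompletionBlock:^{ /* save the downloaded data */ }];
    [request setFailedBlock:^{ /* note this URL for a retry */ }];
    [queue addOperation:request];
}
[queue go];   // ASINetworkQueue is created suspended; go starts it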
EDIT: I just noticed you already mentioned using a download queue. I would therefore suspect it's more of an issue at the other end than on your device. Connections could be dropping for various reasons: the keep-alive setting is too short, resources are too low on the server so it's timing out, or some part of the physical link between the server and the Internet backbone is failing intermittently. To be sure, though, you would need to test your app on multiple devices and confirm that it fails consistently across all of them.
You could possibly try reducing the number of concurrent downloads:
ASIHTTPRequest.sharedQueue.maxConcurrentOperationCount = 2;
This changes the default ASIHTTPRequest queue - if you're using your own queue set the value on that instead.
The default is 4, which is above the limit recommended by the HTTP 1.1 RFC (two connections per server) when using persistent connections and when all the content is on the same server.
Let's imagine an app which is not just another way to post tweets, but something like an aggregator that needs to store and have access to the tweets posted through it.
Since Twitter added a limit on API calls, the app should/may use some cache, and then it should periodically check whether a tweet has been deleted, etc.
How do you manage the limits? How do you think well-trafficked apps survive without being whitelisted?
To name a few.
Aggressive caching. Don't call out to the API unless you have to.
I generally pull down as much data as I can upfront and store it somewhere. Then I operate off the local store until it runs out and needs to be refreshed.
Avoid doing things in real time. Queue up requests and make them on a timer (a small sketch of this appears after the list).
If you're on Linux, cronjobs are the easiest way to do this.
Combine requests as much as possible.
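A small sketch of that timer-driven queue; pendingRequests and performRequest: are placeholders for however you represent and execute the queued work:

NSMutableArray *pendingRequests = [NSMutableArray array];

// Elsewhere in the app, enqueue work instead of calling the API immediately:
// [pendingRequests addObject:workItem];

// Drain the queue every five minutes so bursts of activity don't burn
// through the hourly quota.
[NSTimer scheduledTimerWithTimeInterval:300.0 repeats:YES block:^(NSTimer *timer) {
    while (pendingRequests.count > 0) {
        id workItem = pendingRequests.firstObject;
        [pendingRequests removeObjectAtIndex:0];
        [self performRequest:workItem];   // placeholder: the actual (ideally combined) API call
    }
}];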
Well, you have 100 requests per hour, so the question is how you balance them between the various types of requests. I think the best option is the way TweetDeck does it, which allows you to set the percentage used for each type of request and saves the rest for posting (because that is important too):
[screenshot of TweetDeck's API allocation settings, originally hosted on livefilestore.com]
For the caching, a database would be good, and I would ignore deleted tweets: once you have downloaded a tweet it doesn't matter if it was deleted. If you wanted to, you could in theory just try to open the tweet's public page, and if you get a 404 then it has been deleted. That means no cost against the API.
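A rough sketch of that 404 check, assuming you have the tweet's public URL (the URL format below is illustrative):

NSURL *tweetURL = [NSURL URLWithString:@"https://twitter.com/someuser/status/123456789"];
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:tweetURL];
request.HTTPMethod = @"HEAD";   // only the status code matters here

[[[NSURLSession sharedSession] dataTaskWithRequest:request
    completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        NSInteger status = [(NSHTTPURLResponse *)response statusCode];
        if (status == 404) {
            // The tweet page is gone; mark it as deleted in the local cache.
        }
    }] resume];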