SDWebImage not caching - iOS

I'm downloading images using [SDWebImageDownloader.sharedDownloader downloadImageWithURL] with options set to 0. I'm initially not doing anything with them, with the understanding that they will be cached. However, when I use the exact same function to later display an image, the function is downloading the image again, rather than getting it from the cache (the image cache type is 0). In both cases, the url of the image is the same. Is my understanding regarding caching incorrect?

The easiest way to enjoy cache functionality is to use SDWebImageManager instead of SDWebImageDownloader. SDWebImageManager provides the SDImageCache functionality, whereas if you use SDWebImageDownloader directly, you'll have to rely upon NSURLCache (which has limitations/issues) or write your own cache code.
Also (and implicit in Gustavo's question), if you're just trying to set the image of a UIImageView, it's actually even better not to use either of those classes, and to use the UIImageView+WebCache category instead. It enjoys all of the cache abilities of SDWebImageManager, but also offers other advantages (especially for re-used UITableViewCell and UICollectionViewCell objects).
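For illustration, here's a minimal sketch of the UIImageView category approach in Swift (the syntax below is from a recent SDWebImage release; in the Objective-C era of this question it was sd_setImageWithURL:). The cell class, outlet, and placeholder image name are just placeholders:

```swift
import UIKit
import SDWebImage

final class ArticleCell: UITableViewCell {
    @IBOutlet weak var thumbnailView: UIImageView!

    func configure(with imageURL: URL) {
        // The category checks SDWebImage's memory/disk cache first, downloads only
        // on a cache miss, and cancels the in-flight request if the cell is re-used.
        thumbnailView.sd_setImage(with: imageURL,
                                  placeholderImage: UIImage(named: "placeholder"))
    }
}
```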
In a comment to another user, you say that you're downloading all of the images in advance, "just to get them cached, so that when the user actually does want to see an image he doesn't have to wait."
That is a great stretch objective, but this sort of prefetch (sometimes called eager loading, in contrast to the more common lazy loading) has a couple of implications:
Unless you're confident that the user really will need all of the images, this is an aggressive use of their mobile device's cellular data plan, so maybe you should only do this if on WiFi (which can be determined by Reachability). Apple has even rejected apps for using too much cellular bandwidth.
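Reachability is the traditional way to make that Wi-Fi check; on iOS 12 and later, NWPathMonitor from the Network framework gives you the same signal. A rough sketch, where startPrefetching() is a hypothetical hook into whatever prefetch routine you use:

```swift
import Network

// Hypothetical hook into your own prefetch routine (e.g. SDWebImagePrefetcher).
func startPrefetching() { /* kick off the image prefetch here */ }

let monitor = NWPathMonitor()
monitor.pathUpdateHandler = { path in
    // Only prefetch on Wi-Fi, and skip connections the OS flags as expensive
    // (e.g. a personal hotspot).
    if path.status == .satisfied && path.usesInterfaceType(.wifi) && !path.isExpensive {
        startPrefetching()
    }
}
monitor.start(queue: DispatchQueue(label: "network.monitor"))
```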
The app will be more aggressive than necessary in terms of memory (causing more suspended apps to be terminated, which doesn't affect the UX for your app, but Apple asks us all to be good citizens and not use more RAM than we need). Again, if the user is going to need all of the images, then it's a fine thing to do, but if not, one should really minimize memory consumption and not load the cache up with stuff that might not be needed for the current session. Also note that downloading a bunch of stuff that might never be needed, simply as a precaution, has (modest) battery implications, too.
If you do a lot of requests for background data, make sure you're not using up all of the limited network connections (you only have five) and backlogging the system with a lot of requests. The nice thing is that the UIImageView category naturally favors the current UI (being, fundamentally, a lazy-loading mechanism). But let's say there are 100 images, and the user fires up the app and scrolls down to the bottom of the list. Do you really want the request for #90 (which is on screen and the user is waiting for) to wait for #1-89 to finish?
See WWDC 2012 video Asynchronous Design Patterns with Blocks, GCD, and XPC, section 7, "Separate control and data flow", about 48 min into the video for a discussion of how this is problematic.
If nothing else, I'd make sure that you test the app using the Network Link Conditioner (part of the Hardware IO Tools for Xcode on macOS, or under Settings > General > Developer on the device). So turn on the Network Link Conditioner, remove and reinstall the app (to empty the persistent storage cache), and then fire up the app with this slow connection and try navigating around while the image loading is in progress. A simple "let's kick off a prefetch of everything" may not offer the prioritization of the current UI that you really want on a slow network.
All of this said, you may have thought through all of these implications, and if so, I apologize for belaboring the obvious. It's just that one has to be careful before implementing an aggressive pre-fetch of all images.

Related

Download multiple files with operation queue not stable in background mode

Currently what I want to achieve is to download files from an array, only one file at a time, and have the downloads keep going even when the app goes into the background.
I'm using Rob's code as stated here, but he's using URLSessionConfiguration.default, whereas I want to use URLSessionConfiguration.background(withIdentifier: "uniqueID") instead.
It did work on the first try, but after the app goes to the background everything becomes chaotic: operations start downloading more than one file at a time and no longer in order.
Is there any solution to this, or what should I use instead to achieve what I want? In Android we have a Service to handle this easily.
The whole idea of wrapping requests in operations is only applicable if the app is active/running. It's great for things like constraining the degree of concurrency for foreground requests, managing dependencies, etc.
For a background session that continues to proceed after the app has been suspended, though, none of that is relevant. You create your request, hand it to the background session to manage, and monitor the delegate methods called for your background session. No operations needed/desired. Remember, these requests will be handled by the background session daemon even if your app is suspended (or even if it was terminated in the course of its normal lifecycle, though not if you force-quit it). So the whole idea of operations, operation queues, etc., just doesn't make sense if the background URLSession daemon is handling the requests and your app isn't active.
See https://stackoverflow.com/a/44140059/1271826 for an example of a background session.
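As a rough sketch of what that shape looks like (the session identifier and file-moving logic here are illustrative, not taken from the linked answer):

```swift
import Foundation

final class BackgroundDownloader: NSObject, URLSessionDownloadDelegate {
    // The identifier must be stable across launches so the OS can reconnect
    // completed downloads to your app after it is relaunched in the background.
    private lazy var session: URLSession = {
        let config = URLSessionConfiguration.background(withIdentifier: "com.example.downloads")
        return URLSession(configuration: config, delegate: self, delegateQueue: nil)
    }()

    func enqueue(_ urls: [URL]) {
        // Hand everything to the background session up front; the daemon decides
        // the scheduling, even while the app is suspended.
        for url in urls {
            session.downloadTask(with: url).resume()
        }
    }

    func urlSession(_ session: URLSession,
                    downloadTask: URLSessionDownloadTask,
                    didFinishDownloadingTo location: URL) {
        // Move the file out of its temporary location before this method returns.
        let filename = downloadTask.originalRequest?.url?.lastPathComponent ?? UUID().uuidString
        let destination = FileManager.default
            .urls(for: .documentDirectory, in: .userDomainMask)[0]
            .appendingPathComponent(filename)
        try? FileManager.default.moveItem(at: location, to: destination)
    }
}
```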
By the way, true background sessions are really useful when downloading very large resources that might take a very long time. But they introduce all sorts of complexities (e.g., you often want to debug and diagnose when not connected to the Xcode debugger, which changes your app lifecycle, so you have to resort to mechanisms like unified logging; you need to figure out how to restore the UI if the app was terminated between the time the requests were initiated and when they finished; etc.).
Because of this complexity, you might want to consider whether it is absolutely needed. Sometimes, if you need less than 30 seconds to complete some requests, it's easier to just ask the OS to keep your app running in the background for a little bit after the user leaves the app and use a standard URLSession. For more information, see Extending Your App's Background Execution Time. It's a much easier solution, bypassing many of the background URLSession hassles. But it only works if you need 30 seconds or less. For larger requests that might exceed this small window, a true background URLSession is needed.
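A minimal sketch of that "keep running for a little bit" approach, using UIApplication's background-task API (the function and the persistence step are hypothetical):

```swift
import UIKit

func performShortDownloads(_ urls: [URL]) {
    // Ask the OS for a little extra runtime after the user leaves the app
    // (roughly 30 seconds); end the task as soon as the work is done.
    var taskID: UIBackgroundTaskIdentifier = .invalid
    taskID = UIApplication.shared.beginBackgroundTask {
        // Expiration handler: time is up, so clean up and end the task.
        UIApplication.shared.endBackgroundTask(taskID)
        taskID = .invalid
    }

    let group = DispatchGroup()
    for url in urls {
        group.enter()
        URLSession.shared.dataTask(with: url) { data, _, _ in
            // Hypothetical: persist `data` wherever your app needs it.
            group.leave()
        }.resume()
    }

    group.notify(queue: .main) {
        UIApplication.shared.endBackgroundTask(taskID)
        taskID = .invalid
    }
}
```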
Below, you asked:
There are some downsides to [downloading multiple files in parallel] as I understand it.
No, it's always better to allow downloads to progress asynchronously and in parallel. It's much faster and more efficient. The only time you want to do requests consecutively, one after another, is where you need to parse the response of one request in order to prepare the next request. But that is not the case here.
The exception here is with the default, foreground URLSession. In that case you have to worry about later requests timing out while waiting for earlier requests. In that scenario you might bump up the timeout interval. Or we might wrap our requests in an Operation subclass, allowing us to constrain how many requests run concurrently and to hold subsequent requests until earlier ones finish. But even in that case, we don't usually do it serially, but rather use a maxConcurrentOperationCount of 4 or something like that.
But for background sessions, requests don’t time out just because the background daemon hasn’t gotten around to them yet. Just add your requests to the background URLSession and let the OS handle this for you. You definitely don’t want to download images one at a time, with the background daemon relaunching your app in the background when one download is done so you can initiate the next one. That would be very inefficient (both in terms of the user’s battery as well as speed).
You need to loop through an array of files and add them to the session to download, but they will download asynchronously, so it's hard to keep track of them, especially since there are a lot of files.
Sure, you can't do a naive "add to the end of the array" if the requests are running in parallel, because you're not guaranteed the order in which they will complete. But it's not hard to capture these responses as they come in. Just use a dictionary, for example, perhaps keyed by the URL of the original request. Then you can easily look up in that dictionary to find the response associated with a particular request URL.
It’s incredibly simple. And we now can perform requests in parallel, which is much faster and more efficient.
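As a sketch of that bookkeeping (the class and queue label are arbitrary): record each result under its original request URL as it arrives, and the completion order stops mattering.

```swift
import Foundation

final class DownloadResults {
    // Serialize access, since completion callbacks arrive on arbitrary threads.
    private let queue = DispatchQueue(label: "download.results")
    private var dataByURL: [URL: Data] = [:]

    func record(_ data: Data, for url: URL) {
        queue.sync { dataByURL[url] = data }
    }

    func data(for url: URL) -> Data? {
        queue.sync { dataByURL[url] }
    }
}
```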
You go on to say:
[Downloading in parallel] could lead to high battery consumption with a lot of requests at the same time. That's why I tried to make it download each file one at a time.
No, you never need to perform downloads one at a time for the sake of power. If anything, downloading one at a time is slower, and will take more power.
Unrelated, but if you're downloading 800+ files, you might want to avoid performing these requests when the user is in "Low Data Mode". In iOS 13, for example, you might set allowsExpensiveNetworkAccess and allowsConstrainedNetworkAccess.
Regardless (and especially if you are supporting older iOS versions), you might also want to consider the appropriate settings for isDiscretionary and allowsCellularAccess.
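A sketch of where those knobs live on URLSessionConfiguration (the identifier and the particular values are illustrative; the last two properties require iOS 13):

```swift
import Foundation

let config = URLSessionConfiguration.background(withIdentifier: "com.example.bulk-downloads")
config.allowsCellularAccess = false                  // Wi-Fi only
config.isDiscretionary = true                        // let the OS pick a good time (power/network-wise)
if #available(iOS 13.0, *) {
    config.allowsExpensiveNetworkAccess = false      // avoid hotspots and other "expensive" links
    config.allowsConstrainedNetworkAccess = false    // respect Low Data Mode
}
// Pass `config` to the background URLSession (with its delegate) that performs the downloads.
```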
Bottom line, you want to make sure that you are respectful of a user's limited cellular data plan, or of situations where they're on some expensive service (e.g. connecting via an airplane's expensive data plan or tethered via some local hotspot).
For more information on these considerations, see WWDC 2019 Advances in Networking, Part 1.

Alamofire Priority Queue

I am using Alamofire as my networking library for my Swift app. Is there a way to keep a "priority queue" of network requests with Alamofire? I believe I saw this feature in a library in the past but I can no longer find it or find other posts about this.
Let's say I open a page in my application and it starts to make a few requests. First it gets some JSON, which is fast and no problem.
From that JSON, it pulls out some information and then starts downloading images. These images have the potential to be quite large and take many seconds (~30 seconds or more sometimes). But the tricky part is that the user has the option to move on to the next page before the image(s) finish downloading.
If the user moves on to the next page before the image downloading is done, is it possible to move it on to a lower priority queue? So that when the images on the next page start loading they will go faster? I would even be open to pausing the old one entirely until the new requests are finished if that is even possible.
Keep in mind I am open to many suggestions. I have a lot of freedom with my implementation. So if this is a different library, or different mechanism in iOS that is fine. Even if I continue to use Alamofire for JSON and do all my image downloading and management with something else that would be alright too.
Also, probably irrelevant but I will add it here. I'm using https://github.com/rs/SDWebImage for caching my images once they're fully downloaded. Which is why I don't want to cancel the request completely. I need it to finish and then it won't happen again.
TL;DR I want a fast queue and a slow queue with the ability to move things from the fast queue to the slow queue before they are finished.
Have you considered managing an NSOperationQueue? This tutorial might be helpful. In his example, he pauses the downloads as they scroll off the page, but I believe you could adjust the queuePriority property of the NSOperation objects instead.
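As a sketch of the queuePriority idea (the operation objects here stand in for whatever download Operation subclass you end up using):

```swift
import Foundation

// References to the unfinished image downloads started for the previous page.
var previousPageDownloads: [Operation] = []

func userDidMoveToNextPage() {
    // Demote the old downloads so newly enqueued ones on the same OperationQueue
    // start ahead of them; the old ones still finish eventually, so the cache
    // (e.g. SDWebImage's) still gets populated.
    for op in previousPageDownloads where !op.isFinished {
        op.queuePriority = .veryLow
    }
}
```

Note that queuePriority only influences operations that haven't started executing yet; to slow down or pause one that's already running, you would need to build pause/cancel support into the operation itself, as the tutorial does.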

NSURLConnection and multiple asynchronous requests - is it messing with the data being transmitted?

I have an NSArray of links. I want to parse through them with an online article extractor API (Clear Read), and with the result given back for each article (some HTML) I throw it into an NSString.
My problem arises from the fact that, say my array has 100 URLs in it, I loop through the array shooting each item into the API and getting back some results in JSON. This is firing like 100 NSURLConnection calls at once asynchronously.
I wasn't sure if that'd be a problem, but when I give it 100 URLs (real strings, none are nil) the data that comes back often has either empty values for the JSON keys (when they shouldn't), or the data coming back is nil. There's also a bunch of duplicates.
Should I be handling multiple asynchronous connections better than I am now? If so, how?
A couple of thoughts:
If you're doing concurrent asynchronous requests and are using asynchronous NSURLConnection, then you'll want to define your own class for this download operation to make sure that every connection keeps track of its own properties. That way, everything can be encapsulated within this class where the resulting download objects can keep track of what's downloaded, what's been parsed, etc. If you're not using asynchronous NSURLConnection (e.g. you're just using dataWithContentsOfURL), it's even easier, though you lose some of the progress updates that NSURLConnection provides and/or streaming opportunities.
For best performance, you should do concurrent requests. Having said that, you should not have more than four or five concurrent requests going to any particular server. This is an iOS-imposed constraint, and especially if you have a slow network connection, you risk having connections time out otherwise.
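One minimal way to enforce that limit is an operation queue with a small maxConcurrentOperationCount; sketched here with the synchronous dataWithContentsOfURL-style approach mentioned above (in Swift, Data(contentsOf:)), and with articleURLs standing in for your array of 100 URLs:

```swift
import Foundation

let articleURLs: [URL] = []   // stand-in for your 100 article URLs

let downloadQueue = OperationQueue()
downloadQueue.maxConcurrentOperationCount = 4   // stay at or under the per-host connection limit

let lock = NSLock()
var resultsByURL: [URL: Data] = [:]

for url in articleURLs {
    downloadQueue.addOperation {
        // Synchronous fetch, so each operation occupies its queue slot until done.
        guard let data = try? Data(contentsOf: url) else { return }
        lock.lock()
        resultsByURL[url] = data
        lock.unlock()
    }
}
```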
If you're doing preliminary testing on the simulator, you may want to make sure you try out the "network link conditioner". It's part of the "Hardware IO Tools for Xcode", available at the Downloads for Apple Developers. There are issues (such as the aforementioned timeout problems if you have too many concurrent requests going to a particular server) that only manifest themselves in slow connections.
Having said that, you also want to make sure to test your solution on a device with real-world network speeds. It's easy to successfully run massively parallel tasks on the simulator that are too greedy for the device. Limiting the number of concurrent sessions to five will diminish this resource problem, but it should be part of your testing strategy.
I agree with JRG-Developer, that you should look into established frameworks, such as AFNetworking. Make sure to set the maxConcurrentOperationCount for the queue of the AFHTTPClient, though, if queueing 100 plus operations.
I don't know how much data your 100 requests entail, but be forewarned that the app approval process has been known to reject apps that make extraordinary network requests on cellular networks. What constitutes excessive cellular network activity is not explicitly stated in the app review guidelines, though Avoiding iPhone App Rejection From Apple has claimed that you should ensure that you don't exceed more than 4.5 MB in 5 minutes. You can use Reachability to determine what type of network you are on and perhaps warn the user if they're on cellular (if the amount of data approaches this threshold).
Have you considered using a third party framework - such as AFNetworking - and limiting the number of asynchronous calls happening at once? Perhaps this might help / solve your problem.
In particular, you might consider creating a networking manager class that creates and manages AFHTTPClient(s), which in turn manages AFHTTPRequestOperations, for each endpoint (base URL) you hit.

How can I determine the quality of a connection in iOS?

I'm familiar with using Reachability to determine the type of internet connection (if any) being used on an iOS device. Unfortunately that's not a decent indicator of connection quality. Wifi with low signal strength is pretty sketchy and 3G with anything less than 3 bars is a disaster (not to mention networks that only allow EDGE connections).
How can I determine the quality of my connection so I can help my users decide if they should be downloading larger files on their current connection?
A pragmatic approach would be to download one moderately large-sized file hosted on a reliable, worldwide CDN, at the start of your application. You know the filesize beforehand, you just have to measure the time it takes, make a simple computation and then you've got your estimate of the quality of the connection.
For example, jQuery UI source code, unminified, gzipped weighs roughly 90kB. Downloading it from http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.14/jquery-ui.js takes 327ms here on my Mac. So one can assume I have at least a decent connection that can handle approximately 300kB/s (and in fact, it can handle much more).
The trick is to find the good balance between the original file size and the latency of the network, as the full download speed is never reached on a small file like this. On the other hand, downloading 1MB right after launching your application will surely penalize most of your users, even if it will allow you to measure more precisely the speed of the connection.
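A rough sketch of that measurement with URLSession (the test URL is a placeholder for whatever known-size file you choose):

```swift
import Foundation

// Hypothetical test file of known size hosted on a fast, reliable CDN.
let testURL = URL(string: "https://example-cdn.com/speed-test-90kb.bin")!

let start = Date()
URLSession.shared.dataTask(with: testURL) { data, _, error in
    guard let data = data, error == nil else { return }
    let seconds = Date().timeIntervalSince(start)
    let kilobytesPerSecond = Double(data.count) / 1024.0 / seconds
    print("Rough connection estimate: \(Int(kilobytesPerSecond)) kB/s")
}.resume()
```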
Cyrille's answer is a good pragmatic answer, but is not really in the end a great solution in the mobile context for these reasons:
It involves doing a test "at the start of your application" by which I assume he means when your app launches. But your app may execute for a long while, may go background and then back into the foreground, and all the while the user is changing network contexts with changes in underlying network performance - so that initial test result may bear no relationship to the "current" performance of the network connection.
For the reason he rightly points out, that it is "penalizing" your user by making them download a test file over what may already be constrained network conditions.
You also suggest in your original post that you want your user to decide if they should download based on information you present to them. But I would suggest that this is not a good way to approach interacting with mobile users - that you should not be asking them to make complicated decisions. If absolutely necessary, only ask if they want to download the file if you think it may present a problem, but keep it that simple - "Do you want to download XYZ file (100 MB)?" I personally would avoid even that.
Instead of downloading a test file, the better solution is to monitor and adapt. Measure the performance of the connection as you go along, keep track of the "freshness" of that information you have about how well the connection is performing, and only present your user with a decision to make if based on the on-going performance of the connection it seems necessary.
EDIT: For example, if you determine a patience threshold that in your opinion represents tolerable download performance, keep track of each download that the user does in order to determine if that threshold is being reached. That way, instead of clogging up the user's connection with test downloads, you're using real-world activity as the determining factor for "quality of the connection", which is ultimately about the end-user's experience of the quality of the connection. If you decide to provide the user with the ability to cancel downloads, then you have an excellent "input" about the user's actual patience threshold, and can adapt your functionality to that situation by subsequently giving them the choice before they start a download. If you've flipped into this type of "confirmation" mode, but then find that files are starting to download faster, you can dynamically exit the confirmation mode.
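For instance, a sketch of that bookkeeping: record the throughput of each real download as it completes and compare a short running history against your chosen patience threshold (the 50 kB/s figure and the class name are placeholders):

```swift
import Foundation

final class ConnectionQualityTracker {
    // Placeholder threshold: below this, ask before starting large downloads.
    private let minimumAcceptableKBps: Double = 50
    private var recentKBps: [Double] = []

    func recordDownload(bytes: Int, duration: TimeInterval) {
        guard duration > 0 else { return }
        recentKBps.append(Double(bytes) / 1024.0 / duration)
        if recentKBps.count > 10 { recentKBps.removeFirst() }   // keep the estimate fresh
    }

    var shouldConfirmLargeDownloads: Bool {
        guard !recentKBps.isEmpty else { return false }
        let average = recentKBps.reduce(0, +) / Double(recentKBps.count)
        return average < minimumAcceptableKBps
    }
}
```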
Rob's answer is very good, but for a more specific implementation, start with Apple's SimplePing example source code: https://developer.apple.com/library/archive/samplecode/SimplePing/Introduction/Intro.html#//apple_ref/doc/uid/DTS10000716
Target the domain of the server whose connection quality you want to monitor. Use the ping library to "ping" it on a regular basis (say, every 1 or 10 seconds, depending upon your UI needs). Measure how long it takes to get a response to each ping (or whether it returns at all) to develop an estimate of the connection quality to communicate to your user.

Why not compile shaders on a background thread?

I've been learning OpenGL ES 2.0/GLSL and related iOS quirks by looking at code and developer videos and I've noticed that there's never any mention of asynchronous shader compilation. Aside from instructors, writers, or salesmen (er, engineers) worrying about adding complexity to their examples, is there a reason for that?
For example, most web data retrieval tutorials hammer home the need for doing some sort of gymnastics (pthreads, NSOperation, GCD, baked-in async instance methods, etc.) to keep from blocking the main thread - why would blocking an app launch be considered acceptable?
It can be a little bit tricky to synchronize two EAGLContexts, but besides that, there is nothing against loading this kind of stuff in the background (generally, loading every kind of asset: textures, shaders, etc.).
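As a rough sketch of what that looks like (assuming OpenGL ES 2.0 and an existing mainContext; the sharegroup lets the two contexts share the compiled shader object):

```swift
import Foundation
import OpenGLES

// Compile a shader on a background queue using a second EAGLContext that shares
// resources with the main rendering context via its sharegroup.
func compileShaderInBackground(mainContext: EAGLContext,
                               vertexSource: String,
                               completion: @escaping (GLuint) -> Void) {
    DispatchQueue.global(qos: .utility).async {
        guard let backgroundContext = EAGLContext(api: .openGLES2,
                                                  sharegroup: mainContext.sharegroup),
              EAGLContext.setCurrent(backgroundContext) else { return }

        let shader = glCreateShader(GLenum(GL_VERTEX_SHADER))
        vertexSource.withCString { cString in
            var source: UnsafePointer<GLchar>? = cString
            glShaderSource(shader, 1, &source, nil)
        }
        glCompileShader(shader)

        var status: GLint = 0
        glGetShaderiv(shader, GLenum(GL_COMPILE_STATUS), &status)
        glFlush()   // make the compiled object visible to the other context in the sharegroup

        DispatchQueue.main.async {
            // Hand the shader handle back for use on the main context (0 on failure).
            completion(status == GL_TRUE ? shader : 0)
        }
    }
}
```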
Probably the real reasons are that most people think of OpenGL (ES) as something monolithic that only works on a single thread, or they never had an issue with loading times that made it worth loading stuff on a background thread, or they just don't care (for some people, it's probably all of the above).
For your last question: networking can add a HUGE latency, and by "can" I mean "will". Resource loading isn't that problematic: compared to a network access, loading a shader or texture takes far less time, and you already know roughly how long it will take in the normal case. Plus, people are used to loading screens in games, whereas they don't want to see loading screens when they scroll a table view just so that your application can fetch a picture from a server that isn't responding.
