I am building a UITableView that downloads its data.
The "name" web service is very fast, so I use it to populate the table initially, then I start an operation queue for the images.
Then I use a separate queue for the rest of the data because it loads very slowly, but that affects the image load time. How can I do the two concurrently?
Can you figure out what's slowing the performance here and help me fix it?
As I assume you know, you can specify how many concurrent requests are allowed by setting maxConcurrentOperationCount when you create your queue. Four is a typical value.
self.imageDownloadingQueue.maxConcurrentOperationCount = 4;
The thing is, you can't go much larger than that (due to both iOS restrictions and some server restrictions). Perhaps 5, but no larger than that.
Personally, I wouldn't use up the limited number of max concurrent operations returning the text values. I'd retrieve all of those up front. You do lazy loading of images because they're so large, but text entries are so small that the overhead of doing separate network requests starts to impose its own performance penalties. If you are going to do lazy loading of the descriptions, I'd download them in batches of 50 or 100 or so.
Looking at your source code, at a very minimum you're making twice as many JSON requests as you should (you're retrieving the same JSON in getAnimalRank and getAnimalType). But you really should just alter that initial JSON request to return everything you need: the name, the rank, the type, and the URL (but not the image itself). Then, in a single call, you get everything you need (except the images, which we'll retrieve asynchronously, and which your server is delivering plenty fast for a good UX). And if you decide to keep the individual requests for the rank/type/URL, you need to take a look at your server code, because there's no legitimate reason those shouldn't come back almost instantaneously, and they're currently really slow. But, as I said, you really should just return all of that in the initial JSON request, and your user interface will be remarkably faster.
One final point: you're using separate queues for the details and the image downloads. The entire purpose of using NSOperationQueue and setting maxConcurrentOperationCount is to stay within iOS's limit of about five concurrent requests to a given server. By putting these in two separate queues, you're losing the benefit of maxConcurrentOperationCount. As it turns out, it takes a minute for requests to time out, so you're probably not going to experience a problem, but it still reflects a basic misunderstanding of the purpose of the queues.
Bottom line, you should have only one network queue (because the system limitation is on how many concurrent network connections there are between your device and any given server, not on how many image downloads and, separately, how many description downloads).
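For illustration, here is a minimal sketch of that single shared queue, written in Swift for brevity; the class name, queue label, and operation instances are hypothetical, not taken from your project:

import Foundation

// One queue shared by every network operation, so maxConcurrentOperationCount
// actually caps the total number of simultaneous connections to the server.
final class NetworkQueue {
    static let shared = NetworkQueue()

    let queue: OperationQueue = {
        let q = OperationQueue()
        q.name = "com.example.networkQueue"   // hypothetical label
        q.maxConcurrentOperationCount = 4     // images *and* detail requests share this limit
        return q
    }()
}

// Usage: both kinds of work go onto the same queue.
// NetworkQueue.shared.queue.addOperation(imageDownloadOperation)   // hypothetical Operation instances
// NetworkQueue.shared.queue.addOperation(detailDownloadOperation)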
Have you thought about just doing this asynchronously? I wrote a class to do something very similar to what you describe using blocks. You can do this in two ways:
Just load async whenever cellForRowAtIndexPath fires. This works for lots of situations, but can lead to the wrong image showing for a second until the right one is done loading.
Call the process to load the images when the dragging has stopped. This is generally the way I do things so that the correct image always shows where it should. You can use a placeholder image until the image is loaded from the web.
Look at this SO question for details:
Loading an image into UIImage asynchronously
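As a rough sketch of the first approach (loading whenever cellForRowAtIndexPath fires, with a placeholder and a check that the row is still visible), here is what it might look like, written in Swift; the imageURLs array and imageCache dictionary are hypothetical names, not taken from the question:

import UIKit

// A minimal sketch: placeholder image first, async fetch, then a visibility
// check so a reused cell never ends up showing the wrong image.
final class AnimalListViewController: UITableViewController {
    var imageURLs: [URL] = []                       // hypothetical model data
    private var imageCache: [URL: UIImage] = [:]

    override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "Cell", for: indexPath)
        let url = imageURLs[indexPath.row]

        // Show a placeholder until the real image arrives.
        cell.imageView?.image = UIImage(named: "placeholder")

        if let cached = imageCache[url] {
            cell.imageView?.image = cached
        } else {
            URLSession.shared.dataTask(with: url) { [weak self, weak tableView] data, _, _ in
                guard let data = data, let image = UIImage(data: data) else { return }
                DispatchQueue.main.async {
                    self?.imageCache[url] = image
                    // Ask the table for the cell *now*; if the row scrolled away
                    // or the cell was reused, this won't stamp the wrong image.
                    if let visibleCell = tableView?.cellForRow(at: indexPath) {
                        visibleCell.imageView?.image = image
                        visibleCell.setNeedsLayout()
                    }
                }
            }.resume()
        }
        return cell
    }
}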
What I currently want to achieve is to download files from an array, downloading only one file at a time, and have the downloads continue even when the app goes into the background.
I'm using Rob's code as shown here, but he's using URLSessionConfiguration.default, whereas I want to use URLSessionConfiguration.background(withIdentifier: "uniqueID") instead.
It worked on the first try, but after the app went to the background everything became chaotic: the operations start downloading more than one file at a time and no longer in order.
Is there any solution to this, or what should I use instead to achieve what I want? In Android we have a Service to handle this easily.
The whole idea of wrapping requests in operations is only applicable if the app is active/running. It's great for things like constraining the degree of concurrency for foreground requests, managing dependencies, etc.
For a background session that continues to proceed after the app has been suspended, though, none of that is relevant. You create your request, hand it to the background session to manage, and monitor the delegate methods called for your background session. No operations needed/desired. Remember, these requests will be handled by the background session daemon even if your app is suspended (or even if it was terminated in the normal course of its lifecycle, though not if you force-quit it). So the whole idea of operations, operation queues, etc., just doesn't make sense when the background URLSession daemon is handling the requests and your app isn't active.
See https://stackoverflow.com/a/44140059/1271826 for an example of a background session.
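In rough outline, the pattern looks something like this in Swift (the session identifier and class name are hypothetical):

import Foundation

// Minimal sketch of a background URLSession: hand requests to the daemon and
// monitor delegate callbacks, with no Operation wrappers.
final class BackgroundDownloader: NSObject, URLSessionDownloadDelegate {
    static let shared = BackgroundDownloader()

    lazy var session: URLSession = {
        let config = URLSessionConfiguration.background(withIdentifier: "com.example.downloads")
        return URLSession(configuration: config, delegate: self, delegateQueue: nil)
    }()

    func download(_ urls: [URL]) {
        // Hand everything to the background daemon at once.
        for url in urls {
            session.downloadTask(with: url).resume()
        }
    }

    // Called (possibly after the app has been relaunched in the background)
    // as each download finishes.
    func urlSession(_ session: URLSession,
                    downloadTask: URLSessionDownloadTask,
                    didFinishDownloadingTo location: URL) {
        // Move the file out of the temporary location before returning.
    }

    func urlSessionDidFinishEvents(forBackgroundURLSession session: URLSession) {
        // Call the completion handler saved in
        // application(_:handleEventsForBackgroundURLSession:completionHandler:).
    }
}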
By the way, true background sessions are really useful when downloading very large resources that might take a very long time. But they introduce all sorts of complexities (e.g., you often want to debug and diagnose while not connected to the Xcode debugger, which changes your app's lifecycle, so you have to resort to mechanisms like unified logging; you need to figure out how to restore the UI if the app was terminated between the time the requests were initiated and when they finished; etc.).
Because of this complexity, you might want to consider whether it's absolutely needed. Sometimes, if you need less than 30 seconds to complete some requests, it's easier to just ask the OS to keep your app running in the background for a little bit after the user leaves the app and use a standard URLSession. For more information, see Extending Your App's Background Execution Time. It's a much easier solution, bypassing many of the background URLSession hassles. But it only works if you need 30 seconds or less. For larger requests that might exceed this small window, a true background URLSession is needed.
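A minimal sketch of that simpler alternative, assuming your requests really do finish within the roughly 30-second window (the function and parameter names are hypothetical):

import UIKit

// Ask the OS for a little extra time, do the work with a standard URLSession,
// then end the background task.
func performShortRequests(urls: [URL]) {
    var taskID: UIBackgroundTaskIdentifier = .invalid
    taskID = UIApplication.shared.beginBackgroundTask {
        // Time is about to expire; clean up and end the task.
        UIApplication.shared.endBackgroundTask(taskID)
        taskID = .invalid
    }

    let group = DispatchGroup()
    for url in urls {
        group.enter()
        URLSession.shared.dataTask(with: url) { data, _, _ in
            defer { group.leave() }
            // Handle `data` ...
        }.resume()
    }

    group.notify(queue: .main) {
        UIApplication.shared.endBackgroundTask(taskID)
        taskID = .invalid
    }
}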
Below, you asked:
There are some downsides with [downloading multiple files in parallel] as I understand it.
No, it's always better to allow downloads to progress asynchronously and in parallel. It's much faster and more efficient. The only time you want to perform requests consecutively, one after another, is when you need to parse the response of one request in order to prepare the next request. But that is not the case here.
The exception here is with the default, foreground URLSession. In that case you have to worry about later requests timing out while waiting for earlier ones. In that scenario you might bump up the timeout interval. Or we might wrap our requests in an Operation subclass, which lets us constrain how many requests run concurrently and hold off starting subsequent requests until earlier ones finish. But even in that case, we don't usually do it serially, but rather use a maxConcurrentOperationCount of 4 or something like that.
But for background sessions, requests don’t time out just because the background daemon hasn’t gotten around to them yet. Just add your requests to the background URLSession and let the OS handle this for you. You definitely don’t want to download images one at a time, with the background daemon relaunching your app in the background when one download is done so you can initiate the next one. That would be very inefficient (both in terms of the user’s battery as well as speed).
You need to loop through an array of files and add them to the session to download them, but they will download asynchronously, so it's hard to keep track of them, especially since there are a lot of files.
Sure, you can't do a naive "add to the end of the array" if the requests are running in parallel, because you're not guaranteed the order in which they will complete. But it's not hard to capture these responses as they come in. Just use a dictionary, for example, perhaps keyed by the URL of the original request. Then you can easily look up in that dictionary the response associated with a particular request URL.
It’s incredibly simple. And we now can perform requests in parallel, which is much faster and more efficient.
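For example, a tiny sketch of that bookkeeping (the class name and queue label are hypothetical):

import Foundation

// Collects out-of-order responses keyed by the original request URL.
final class DownloadResults {
    private var results: [URL: Data] = [:]
    private let syncQueue = DispatchQueue(label: "com.example.results") // serializes access

    func store(_ data: Data, for url: URL) {
        syncQueue.async { self.results[url] = data }
    }

    func data(for url: URL) -> Data? {
        syncQueue.sync { results[url] }
    }
}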
You go on to say:
[Downloading in parallel] could lead to high battery consumption with a lot of requests at the same time. That's why I tried to make it download each file one at a time.
No, you never need to perform downloads one at a time for the sake of power. If anything, downloading one at a time is slower, and will take more power.
Unrelated, but if you're downloading 800+ files, you might want to let the user avoid performing these requests when in Low Data Mode. In iOS 13, for example, you might set allowsExpensiveNetworkAccess and allowsConstrainedNetworkAccess.
Regardless (and especially if you are supporting older iOS versions), you might also want to consider the appropriate settings for isDiscretionary and allowsCellularAccess.
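Putting those settings together, a sketch might look like this (the session identifier and chosen values are hypothetical; pick whatever matches your app's needs):

import Foundation

let config = URLSessionConfiguration.background(withIdentifier: "com.example.downloads")
config.isDiscretionary = true          // let the system schedule transfers at a good time
config.allowsCellularAccess = false    // or true, depending on the user's choice
if #available(iOS 13.0, *) {
    config.allowsExpensiveNetworkAccess = false     // e.g. personal hotspots
    config.allowsConstrainedNetworkAccess = false   // respect Low Data Mode
}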
Bottom line, you want to make sure that you are respectful of a user’s limited cellular data plan or if they’re on some expensive service (e.g. connecting on an airplane’s expensive data plan or tethered via some local hotspot).
For more information on these considerations, see WWDC 2019 Advances in Networking, Part 1.
I am using Alamofire as my networking library for my Swift app. Is there a way to keep a "priority queue" of network requests with Alamofire? I believe I saw this feature in a library in the past but I can no longer find it or find other posts about this.
Let's say I open a page in my application and it starts to make a few requests. First it gets some JSON, which is fast and no problem.
From that JSON, it pulls out some information and then starts downloading images. These images have the potential to be quite large and take many seconds (~30 seconds or more sometimes). But the tricky part is that the user has the option to move on to the next page before the image(s) finish downloading.
If the user moves on to the next page before the image downloading is done, is it possible to move it to a lower-priority queue, so that the images on the next page load faster when they start? I would even be open to pausing the old download entirely until the new requests are finished, if that is even possible.
Keep in mind I am open to many suggestions. I have a lot of freedom with my implementation. So if this is a different library, or different mechanism in iOS that is fine. Even if I continue to use Alamofire for JSON and do all my image downloading and management with something else that would be alright too.
Also, probably irrelevant but I will add it here. I'm using https://github.com/rs/SDWebImage for caching my images once they're fully downloaded. Which is why I don't want to cancel the request completely. I need it to finish and then it won't happen again.
TL;DR I want a fast queue and a slow queue with the ability to move things from the fast queue to the slow queue before they are finished.
Have you considered managing an NSOperationQueue? This tutorial might be helpful. In his example, he pauses the downloads as they scroll off the page, but I believe you could adjust the queuePriority property of the NSOperation objects instead.
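As a rough sketch of that queuePriority idea in Swift (the queue, dictionary, and function names are hypothetical, and note that changing queuePriority only affects operations that haven't started executing yet):

import Foundation

let downloadQueue = OperationQueue()
downloadQueue.maxConcurrentOperationCount = 4

var operationsByURL: [URL: Operation] = [:]

func startDownload(of url: URL) {
    let op = BlockOperation {
        // ... perform the download for `url` synchronously here ...
    }
    op.queuePriority = .high              // current page: the "fast queue"
    operationsByURL[url] = op
    downloadQueue.addOperation(op)
}

// When the user moves on to the next page, demote whatever is still pending.
func demoteDownloads(for urls: [URL]) {
    for url in urls {
        operationsByURL[url]?.queuePriority = .veryLow   // the "slow queue"
    }
}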
Here is the problem I have. I have several tasks to complete in the background while the application is running. When I run these tasks in the background by pushing them to a concurrent dispatch queue, it takes more than 10 seconds to complete all of them. They basically load data from disk, parse it, and present the result to the user. That is, they are just cached results, and they hugely improve the user experience.
These cached results are used by a particular feature inside the app. When that feature is not used immediately after opening the application, it is not a problem that the supporting data takes 10 seconds to load, because by the time the user decides to use it, the data will already be loaded.
But when the user enters that feature immediately after opening the app, it takes considerable time (from the user's point of view) to load the data. Also, the whole data set is not needed at once, but rather one piece of it at any given moment.
That's why we need to load the data concurrently and deliver results as soon as possible. So I decided to break the data into chunks; when the user requests the data, we load the corresponding chunk on a background thread and give that thread the highest priority. I'll explain what I mean.
Imagine there are 100 pieces of data, and it takes more than 10 seconds to load them all. Whenever the user queries the data for the first time, the app determines which chunk the user needs and starts loading that chunk. After that part is loaded, the remaining data is also loaded in the background, to make later queries faster (without the lag of loading the cache). But here a problem occurs: if the user changes the query immediately after entering one, say on the 2nd second of the loading process (remember, it takes more than 10 seconds, so more than 8 seconds remain), then in the extreme case the user will have to wait until all the data has loaded before receiving theirs. That's why I need some way to manage the execution of the background tasks. That is, when the user changes the input, I should change the execution priorities and give the thread that loads the corresponding chunk the highest priority, without stopping it, so that it receives more processor time, finishes sooner, and delivers results to the user faster than it would if the priorities were left the same. I know I can assign priorities to queues. But is there a way to change them dynamically while they are still executing?
Or do I need to implement custom thread management in order to implement this behaviour? I really don't want to dive into thread management, and I would be glad if it is possible to implement this using only dispatch or operation queues.
I hope I've described the problem well. If not, please comment below on what is unclear and I'll explain.
Thank you so much for reading this far :) And special thanks to whoever provides an answer. And very special thanks to whoever gives me a solution using dispatch or operation queues :)))
I think you need to move away from thinking about the priority at which the queues are running (which actually doesn't sound very important for the scenario you're describing) and more towards how you can use Dispatch I/O, or an even simpler dispatch source, to control how the data is being read in. As you say, it takes 10 seconds to load the data, and if the user suddenly changes their query immediately after asking, you need to essentially stop reading the data for the previous request and do whatever needs to be done to fulfill the most recent query. Using Dispatch I/O to chunk the data (asynchronously) and update the UI, also asynchronously, will allow you to change your mind mid-stream (using some sort of semaphore or cancellation flag) and either continue to trickle the data in (you don't say whether that data will remain useful if the user changes their mind), suspend the reading process, or cancel it altogether and start a new operation. Either way, being able to suspend/resume a source and also have it fire callbacks for reasonably small chunks of data will certainly enable you to make decisions on a much more granular chunk of time than 8 seconds!
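To make that concrete, here is a minimal Swift sketch of chunked reading with DispatchIO plus a cancellation flag; the file path handling, chunk size, and class name are all hypothetical:

import Foundation

// Reads a file in chunks so a query change can stop the read mid-stream.
final class ChunkedLoader {
    private var cancelled = false
    private var channel: DispatchIO?

    func load(path: String, chunkHandler: @escaping (DispatchData) -> Void) {
        let queue = DispatchQueue(label: "com.example.chunked-loader")
        channel = DispatchIO(type: .stream, path: path, oflag: O_RDONLY, mode: 0,
                             queue: queue) { error in
            // Cleanup handler: the file descriptor has been closed.
            if error != 0 { print("channel closed with error \(error)") }
        }
        channel?.setLimit(lowWater: 64 * 1024)  // deliver data in roughly 64 KB chunks

        channel?.read(offset: 0, length: Int.max, queue: queue) { [weak self] done, data, error in
            guard let self = self, error == 0 else { return }
            if self.cancelled {
                self.channel?.close(flags: .stop)   // abandon the rest of the file
                return
            }
            if let data = data, !data.isEmpty {
                chunkHandler(data)                  // parse this chunk, update the UI asynchronously
            }
            if done { self.channel?.close() }
        }
    }

    func cancel() { cancelled = true }
}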
I'm afraid the only way to do that is to cancel the running operation before starting a new one.
You cannot remove it from the queue until it's done or cancelled.
As an improvement for your problem, I would suggest loading things in the background even before the user needs them, so that you can serve them from the cache once they're there.
You can create two NSOperationQueues with two different priorities and download things in the background on the low-priority queue whenever the user is idle. For important operations you can have a high-priority queue, which you cancel each time the search term changes.
On top of that, you just need to cache the results from both of those queues.
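A bare-bones sketch of that two-queue arrangement in Swift (queue names, quality-of-service choices, and the search handler are hypothetical):

import Foundation

let lowPriorityQueue: OperationQueue = {
    let q = OperationQueue()
    q.qualityOfService = .utility        // background pre-loading while the user is idle
    return q
}()

let highPriorityQueue: OperationQueue = {
    let q = OperationQueue()
    q.qualityOfService = .userInitiated  // work the user is actively waiting on
    return q
}()

func searchTermChanged(to term: String) {
    // Drop whatever the user no longer cares about ...
    highPriorityQueue.cancelAllOperations()
    // ... and enqueue the load for the new term, caching the result when done.
    highPriorityQueue.addOperation {
        // load and cache the chunk for `term`
    }
}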
I have an NSArray of links. I want to run each one through an online article-extractor API (Clear Read), and with the result given back for each article (some HTML), I throw it into an NSString.
My problem arises from the fact that if my array has, say, 100 URLs in it, I loop through the array, shooting each item at the API and getting back some results in JSON. This fires something like 100 asynchronous NSURLConnection calls at once.
I wasn't sure if that would be a problem, but when I give it 100 URLs (real strings, none are nil), the data that comes back often has empty values for the JSON keys (when it shouldn't), or the data coming back is nil. There are also a bunch of duplicates.
Should I be handling multiple asynchronous connections better than I am now? If so, how?
A couple of thoughts:
If you're doing concurrent asynchronous requests and are using asynchronous NSURLConnection, then you'll want to define your own class for this download operation to make sure that every connection keeps track of its own properties. That way, everything can be encapsulated within this class, where the resulting download objects keep track of what's downloaded, what's been parsed, etc. If you're not using asynchronous NSURLConnection (e.g. you're just using dataWithContentsOfURL), it's even easier, though you lose some of the progress updates and streaming opportunities that NSURLConnection provides.
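NSURLConnection has since been deprecated, so purely as an illustration of the encapsulation idea, here is a sketch using URLSession instead; the class and property names are hypothetical:

import Foundation

// Each download keeps track of its own URL, raw data, and parsed result,
// so responses can never get mixed up or duplicated across requests.
final class ArticleDownload {
    let url: URL
    private(set) var json: [String: Any]?

    init(url: URL) {
        self.url = url
    }

    func start(completion: @escaping (ArticleDownload) -> Void) {
        URLSession.shared.dataTask(with: url) { [weak self] data, _, error in
            guard let self = self else { return }
            if let data = data, error == nil {
                self.json = (try? JSONSerialization.jsonObject(with: data)) as? [String: Any]
            }
            completion(self)   // the caller gets back *this* download, keyed to its own URL
        }.resume()
    }
}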
For best performance, you should do concurrent requests. Having said that, you should not have more than four or five concurrent requests going to any particular server. This is an iOS-imposed constraint, and especially if you have a slow network connection, you otherwise risk having connections time out.
If you're doing preliminary testing on the simulator, you may want to make sure you try out the Network Link Conditioner. It's part of the "Hardware IO Tools for Xcode", available at Downloads for Apple Developers. There are issues (such as the aforementioned timeout problems when you have too many concurrent requests going to a particular server) that only manifest themselves on slow connections.
Having said that, you also want to make sure to test your solution on a device with real-world network speeds. It's easy to run massively parallel tasks successfully on the simulator that are too greedy for the device. Limiting the number of concurrent requests to five will diminish this resource problem, but it should be part of your testing strategy.
I agree with JRG-Developer that you should look into established frameworks, such as AFNetworking. Make sure to set the maxConcurrentOperationCount for the AFHTTPClient's operation queue, though, if you're queueing 100-plus operations.
I don't know how much data your 100 requests entail, but be forewarned that the app approval process has been known to reject apps that make excessive network requests on cellular networks. What constitutes excessive cellular network activity is not explicitly stated in the app review guidelines, though Avoiding iPhone App Rejection From Apple has claimed that you should ensure you don't exceed 4.5 MB in 5 minutes. You can use Reachability to determine what type of network you are on and perhaps warn the user if they're on cellular (if the amount of data approaches this threshold).
Have you considered using a third party framework - such as AFNetworking - and limiting the number of asynchronous calls happening at once? Perhaps this might help / solve your problem.
In particular, you might consider creating a networking manager class that creates and manages AFHTTPClient(s), which in turn manages AFHTTPRequestOperations, for each endpoint (base URL) you hit.
Are there any guidelines, benchmarks or similar on what you can expect in terms of loading times, when you're loading a number of tracks (using SPAsyncLoading waitUntilLoaded:timeout:then)?
The reason for me asking is that I've tried loading ~20 tracks in a batch, since the track metadata's going to be used throughout a flow in my client (which is split up in steps, in between which I'd rather not have any loading times aside from an initial delay, which is more acceptable to the user in this case).
I.e., how many tracks is it reasonable to attempt to load at once, and how long "should" it take?
Is loading 20 tracks "a lot"? Does loading a single track itself trigger a number of additional requests to pull in the metadata, making it a very bad idea to try to load more than a handful at once? And is there any way of finding out why loading a track failed, aside from it just timing out?
Rather often (maybe 1 time in 10), loading fails after the default 20-second timeout (and I've tried longer timeouts, without much difference). Sometimes all 20 tracks fail to load, sometimes only a single track.
I daresay my internet connection stays the same (as much as it can, anyway) between these attempts (between which there can be a minute or two).
I realize there's a lot of fuzzy input here on what's normal and what's not, and this obviously depends on a number of factors such as your internet connection, the status of the Spotify servers and so on, but maybe it's possible to give some kind of hint on what to expect and whether there are any dos and don'ts specific to the Spotify API.
Generally, the rule is: only load tracks whose metadata you need to display to the user right now, and perhaps pre-load the tracks that will be displayed next.
In addition, since CocoaLibSpotify uses a queue, the more load you place on the library, the more overhead you'll have. For example, if you make separate SPAsyncLoading calls for a bunch of stuff, each call is queued separately, but their timeout timers all start immediately. If you enqueue enough stuff, some of it might not even have started loading by the time its timeout fires.
However, since a bunch of queueing goes on internally, this also happens if you throw a ton of stuff into a single SPAsyncLoading call.
To keep things fast, keep it light and try to follow the guideline in the first sentence. In addition, try to keep things efficient:
SPAlbumBrowse will load that album's track metadata in one hit, reducing backend load and queueing time.
Same for SPArtistBrowse.
Same for SPPlaylist, I believe.