I've been learning OpenGL ES 2.0/GLSL and related iOS quirks by looking at code and developer videos and I've noticed that there's never any mention of asynchronous shader compilation. Aside from instructors, writers, or salesmen (er, engineers) worrying about adding complexity to their examples, is there a reason for that?
For example, most web data retrieval tutorials hammer home the need for some sort of gymnastics (pthreads, NSOperation, GCD, baked-in async instance methods, etc.) to keep from blocking the main thread, so why would blocking an app launch be considered acceptable?
It can be a little tricky to synchronize two EAGLContexts, but besides that there is nothing stopping you from loading this kind of thing in the background (and, generally, every kind of asset: textures, shaders, etc.).
Probably the real reasons are that most people think of OpenGL (ES) as something monolithic that only works on a single thread, that they never had loading times bad enough to make background loading worthwhile, or that they simply don't care (for some people it's probably all of the above).
For your last question: networking can add a HUGE latency, and by "can" I mean "will". Resource loading isn't that problematic; compared to a network access, loading a shader or texture takes far less time, and you already know roughly how long it will take in the normal case. Plus, people are used to loading screens in games, whereas they don't want to see a loading screen when they scroll a table view just so that your application can fetch a picture from a server that doesn't respond.
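For what it's worth, here is a minimal sketch of compiling a program on a background thread using a second EAGLContext that shares the main context's sharegroup (loadShaderSource and buildProgram are hypothetical helpers, and error handling is omitted):

#import <OpenGLES/EAGL.h>
#import <OpenGLES/ES2/gl.h>

// Assumes self.mainContext is the EAGLContext the main thread renders with.
// Shader and program objects live in the sharegroup, so a program linked on
// this background context can be used by the main context afterwards.
- (void)compileProgramInBackgroundWithCompletion:(void (^)(GLuint program))completion
{
    EAGLSharegroup *sharegroup = self.mainContext.sharegroup;

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // A context can only be current on one thread at a time,
        // so create a fresh one for this background thread.
        EAGLContext *loaderContext =
            [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2
                                  sharegroup:sharegroup];
        [EAGLContext setCurrentContext:loaderContext];

        GLuint program = buildProgram(loadShaderSource(@"shader.vsh"),   // hypothetical helpers
                                      loadShaderSource(@"shader.fsh"));
        glFlush();  // commit the commands so the main context sees the finished program
        [EAGLContext setCurrentContext:nil];  // don't leave a context current on a GCD worker thread

        dispatch_async(dispatch_get_main_queue(), ^{
            completion(program);
        });
    });
}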
I'm downloading images using [SDWebImageDownloader.sharedDownloader downloadImageWithURL] with options set to 0. Initially I'm not doing anything with them, on the understanding that they will be cached. However, when I use the exact same function to later display an image, it downloads the image again rather than getting it from the cache (the image cache type is 0). In both cases the URL of the image is the same. Is my understanding of the caching incorrect?
The easiest way to enjoy cache functionality is to use SDWebImageManager instead of SDWebImageDownloader. SDWebImageManager provides the SDImageCache functionality, whereas if you use SDWebImageDownloader you'll have to rely upon NSURLCache (which has limitations/issues) or write your own cache code.
Also (and implicit in Gustavo's question), if you're just trying to set the image of a UIImageView, it's actually even better to use neither of those classes and use the UIImageView+WebCache category instead. It enjoys all of the cache abilities of SDWebImageManager, but also offers other advantages (especially for re-used UITableViewCell and UICollectionViewCell objects).
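A minimal sketch of the category approach, assuming a table view of remote images (depending on your SDWebImage version the method is sd_setImageWithURL:placeholderImage: or setImageWithURL:placeholderImage:, and the placeholder image name is just an example):

#import <SDWebImage/UIImageView+WebCache.h>

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"Cell" forIndexPath:indexPath];

    // Downloads asynchronously, caches to memory and disk via SDImageCache,
    // and cancels the in-flight request if the cell is re-used for another row.
    NSURL *imageURL = self.imageURLs[indexPath.row];   // assumed model array of NSURLs
    [cell.imageView sd_setImageWithURL:imageURL
                      placeholderImage:[UIImage imageNamed:@"placeholder"]];

    return cell;
}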
In a comment to another user, you say that you're downloading all of the images in advance, "just to get them cached, so that when the user actually does want to see an image he doesn't have to wait."
That is a great stretch objective, but this sort of prefetch (sometimes called eager loading, in contrast to the more common lazy loading) has a few implications:
Unless you're confident that the user really will need all of the images, this is an aggressive use of their mobile device's cellular data plan, so maybe you should only do this when on WiFi (which can be determined with Reachability; see the sketch at the end of this answer). Apple has even rejected apps for using too much cellular bandwidth.
The app will be more aggressive than necessary in terms of memory (causing more suspended apps to be terminated, which doesn't affect the UX of your app, but Apple asks us all to be good citizens and not use more RAM than we need). Again, if the user was going to need all of the images, then it's a fine thing to do, but if not, one should really minimize memory consumption and not load the cache up with stuff that might not be needed for the current session. Also note that downloading a bunch of stuff purely as a precaution has (modest) battery implications, too.
If you do a lot of requests for background data, make sure you're not using up all of the limited number of network connections (you only have five per server) and backlogging the system with a lot of requests. The nice thing is that the UIImageView category naturally favors the current UI (being, fundamentally, a lazy-loading mechanism). But let's say there are 100 images, and the user fires up the app and scrolls down to the bottom of the list. Do you really want the request for #90 (which is on screen and which the user is waiting for) to wait for #1-89 to finish?
See WWDC 2012 video Asynchronous Design Patterns with Blocks, GCD, and XPC, section 7, "Separate control and data flow", about 48 min into the video for a discussion of how this is problematic.
If nothing else, I'd make sure that you test the app using the Network Link Conditioner (part of the Hardware IO Tools for the Mac, or under Settings > General > Developer on the device). So turn on the Network Link Conditioner, remove and reinstall the app (to empty the persistent storage cache), and then fire up the app on this slow connection and try navigating around while the image loading is in progress. A simple "let's kick off a prefetch of everything" may not offer the prioritization of the current UI that you really want on a slow network.
All of this said, you may have thought through all of these implications, and if so, I apologize for belaboring the obvious. It's just that one has to be careful before implementing an aggressive pre-fetch of all images.
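If you do decide to prefetch, a minimal sketch of gating it on WiFi might look like this (it assumes Apple's Reachability sample class and SDWebImage's SDWebImagePrefetcher; self.imageURLs is a hypothetical array of NSURL objects):

#import "Reachability.h"
#import <SDWebImage/SDWebImagePrefetcher.h>

- (void)prefetchImagesIfOnWiFi
{
    // Only prefetch on WiFi so we don't burn through the user's cellular data plan.
    Reachability *reachability = [Reachability reachabilityForInternetConnection];
    if ([reachability currentReachabilityStatus] != ReachableViaWiFi) {
        return;   // fall back to ordinary lazy loading
    }

    // The prefetcher downloads at low priority in the background and fills the
    // same SDImageCache that SDWebImageManager and UIImageView+WebCache read from.
    [[SDWebImagePrefetcher sharedImagePrefetcher] prefetchURLs:self.imageURLs];
}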
WebGL is nice and asynchronous in that you can send off a long list of rendering commands without waiting for them to complete. However, if for some reason you do need to wait for the rendering to complete, you have to do it synchronously with gl.finish(). Surely it would be better if gl.finish accepted a callback and returned immediately?
Question: Is there any way to emulate this reliably?
Use case: I am rendering a large number of vertices to a large off-screen canvas and then using drawImage to copy sections of this large canvas to small canvases on the page. I don't actually use gl.finish(), but drawImage() seems to have the same effect. In my application, re-rendering is only triggered when the user performs an action (e.g. clicking a button), and it may take several hundred milliseconds. It would be nice if the browser remained responsive during rendering, allowing scrolling and so on. I am looking in particular for a Chrome solution, though something that also works in Firefox and Safari would be good.
Possible (bad) answer: You could try to estimate how long rendering is going to take and then set a timeout that starts with the call to gl.finish(). However, doing this estimation reliably for all sizes of vertex buffer and all users is going to be pretty tricky and inaccurate.
Possible (non-)answer: requestAnimationFrame does what I'm looking for...it doesn't though, does it?
Possible answer in 2018: Perhaps the ImageBitmap API solves this problem - see MDN docs.
You've already partially hit on your answer: drawImage() does indeed have finish-like behavior, in that it forces all outstanding drawing commands to complete before it reads back the image data. The problem is that even if gl.finish() did what you wanted it to (wait for rendering to complete), you would still see the same behavior you do now: the main thread would be blocked while the rendering finishes, interrupting the user's ability to interact with the page.
Ideally what you would want in this scenario is some sort of callback that indicates when a set of draw commands have been completed without actually blocking to wait for them. Unfortunately no such callback exists (and it would be surprisingly difficult to provide one, given the way the browser's internals work!)
A decent middle ground in your case may be to make some intelligent estimates of when you feel the image may be ready. For example, once you have dispatched your draw calls, spin through 3 or 4 requestAnimationFrame callbacks before you call drawImage. If you consistently observe it taking longer (10 frames?), then spin for longer. This would allow users to continue interacting with the page normally, and would either produce no delay when you do call drawImage (because the contents have finished rendering) or much less delay (because the synchronous step happens mid-way through the render). Depending on the intended usage of your site, non-realtime rendering could probably even stand to spin for a full second or so before presenting.
This certainly isn't a perfect solution, and I wish I had a better answer for you. Perhaps WebGL will gain the ability to query this type of status in the future, because it would be valuable in cases like yours, but for now this is likely the best you can do.
I am using a UITableView to display downloaded data.
The name web service is very fast, so I use it to populate the table initially, and then I start an operation queue for the images.
Then I use a separate queue for the rest of the data, because it loads very slowly, but that affects the image load time. How can I do the two concurrently?
Can you figure out what's slowing the performance there and help me fix it?
As I assume you know, you can specify how many concurrent requests to allow by setting maxConcurrentOperationCount when you create your queue. Four is a typical value.
self.imageDownloadingQueue.maxConcurrentOperationCount = 4;
The thing is, you can't go much larger than that (due to both iOS restrictions and some server restrictions). Perhaps 5, but no larger than that.
Personally, I wouldn't use up the limited number of max concurrent operations returning the text values. I'd retrieve all of those up front. You do lazy loading of images because they're so large, but text entries are so small that the overhead of doing separate network requests starts to impose its own performance penalties. If you are going to do lazy loading of the descriptions, I'd download them in batches of 50 or 100 or so.
Looking at your source code, at a very minimum you're making twice as many JSON requests as you should (you're retrieving the same JSON in getAnimalRank and getAnimalType). But you really should just alter that initial JSON request to return everything you need: the name, the rank, the type, and the URL (but not the image itself). Then, in a single call, you get everything you need (except the images, which we'll retrieve asynchronously, and which your server is delivering plenty fast for the UX). And if you decide to keep the individual requests for the rank/type/URL, you need to take a look at your server code, because there's no legitimate reason those shouldn't come back instantaneously, and they're currently really slow. But, as I said, you really should just return all of that in the initial JSON request, and your user interface will be remarkably faster.
One final point: You're using separate queues for details and image downloads. The entire purpose of using NSOperationQueue and setting maxConcurrentOperationCount is that iOS can only execute 5 concurrent requests to a given server. By putting these in two separate queues, you're losing the benefit of maxConcurrentOperationCount. As it turns out, it takes a minute for requests to time out, so you're probably not going to experience a problem, but it still reflects a basic misunderstanding of the purpose of the queues.
Bottom line, you should have only one network queue (because the system limitation is on how many concurrent network connections there are between your device and any given server, not on how many image downloads and, separately, how many description downloads).
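To make that concrete, here is a minimal sketch of the single-queue approach (the queue, the URLs, and the updateCellWithImage: helper are illustrative names, not from your project):

// One queue for all traffic to the server, so maxConcurrentOperationCount
// actually bounds the number of simultaneous connections.
self.networkQueue = [[NSOperationQueue alloc] init];
self.networkQueue.maxConcurrentOperationCount = 4;

// Detail (JSON) requests and image requests share the same queue.
[self.networkQueue addOperationWithBlock:^{
    NSData *jsonData = [NSData dataWithContentsOfURL:detailURL];   // detailURL assumed
    NSArray *animals = [NSJSONSerialization JSONObjectWithData:jsonData options:0 error:nil];
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        self.animals = animals;             // assumed model property
        [self.tableView reloadData];
    }];
}];

[self.networkQueue addOperationWithBlock:^{
    NSData *imageData = [NSData dataWithContentsOfURL:imageURL];   // imageURL assumed
    UIImage *image = [UIImage imageWithData:imageData];
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        [self updateCellWithImage:image];   // hypothetical UI update
    }];
}];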
Have you thought about just doing this asynchronously? I wrote a class to do something very similar to what you describe using blocks. You can do this two ways:
Just load asynchronously whenever cellForRowAtIndexPath fires (see the sketch after the link below). This works in lots of situations, but can lead to the wrong image showing for a second until the right one is done loading.
Call the process to load the images when the dragging has stopped. This is generally the way I do things so that the correct image always shows where it should. You can use a placeholder image until the image is loaded from the web.
Look at this SO question for details:
Loading an image into UIImage asynchronously
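For the first approach, a minimal GCD sketch inside cellForRowAtIndexPath: might look like this (imageURLForIndexPath: is a hypothetical helper; the re-check at the end guards against re-used cells showing the wrong image):

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"Cell" forIndexPath:indexPath];
    cell.imageView.image = [UIImage imageNamed:@"placeholder"];   // shown until the download finishes

    NSURL *url = [self imageURLForIndexPath:indexPath];           // hypothetical helper
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSData *data = [NSData dataWithContentsOfURL:url];
        UIImage *image = data ? [UIImage imageWithData:data] : nil;

        dispatch_async(dispatch_get_main_queue(), ^{
            // The cell may have been re-used for another row by now, so only
            // update it if it is still on screen for the row we downloaded.
            UITableViewCell *visibleCell = [tableView cellForRowAtIndexPath:indexPath];
            if (visibleCell && image) {
                visibleCell.imageView.image = image;
                [visibleCell setNeedsLayout];
            }
        });
    });

    return cell;
}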
I have an application that can run for quite a long time scanning a database.
During this process I keep my program responsive by calling ProcessMessages.
This ProcessMessages call is triggered when my progress bar is updated and incremented.
This works fine in most cases, but when the databases get larger it takes longer for the progress bar to advance 1%, and the program becomes unresponsive in the meantime.
Is there another way to keep my program alive besides ProcessMessages?
Multithreading is the answer. A standard Delphi application is basically a single-threaded application that can do one thing at a time. Hence the GUI lockup: it can't remain responsive while it's doing something else.
If you want a responsive GUI and heavy lifting at the same time, you need to put the heavy lifting in a separate thread or threads. That way your main thread can keep the program responsive while the worker threads do the heavy lifting.
This works nicely not only for heavy database work, but also for things like downloading files, or any situation where the answer from, for instance, a remote server can take a long time.
But this answer will probably give you more questions than answers, because explaining HOW to use multithreading would be too big an explanation for this question.
One other thing, though: take a long, hard look at your database code. How are you retrieving records from the database? Are there good indexes on the database? And so on. You can get huge speed improvements by optimizing this code before you even have to start thinking about multithreading.
I've found the following resource, http://thaddy.co.uk/threads/ (which you can download with pictures at http://cc.embarcadero.com/item/14809), to be a very useful threading tutorial.
If you want to make your GUI program appear responsive, you must service the message queue in a timely fashion. There is no alternative.
When it comes to running database queries, the way to do that without freezing your UI is to move the query to a different thread.
I have an iPhone app that is essentially a mobile front end for a website. Pretty much everything it does is call API methods on our server. The app retrieves the user's information, and keeps updating the server using the API.
My colleague and I had a discussion whether to introduce GCD to the downloading aspect on the app. My colleague argues that since the UI needs to wait for the download to complete before it can display the pictures, text or whatever, there is absolutely no need for GCD. My argument is that we should keep the main thread busy with UI rendering (even if there is no data), and introduce GCD to the app to create other threads for download.
Which argument is right here? In my case, if the UI renders with no data, will there be some sort of lag? Which will yield a cleaner, sleeker and faster app?
One argument would be: what will happen when the download fails and times out because there is a problem at the server end?
Without GCD the app will remain blocked and will crash after a timeout, since the UI cannot be blocked for longer than 20 seconds.
With GCD the application remains functional; there will simply be no data being downloaded, and the application will not crash.
Other aspects to take into account are:
the thread safety of the objects that you are using
how you handle downloads that are no longer necessary because the user navigates away from the page
I don't think doing time-consuming operations on the main thread is a good idea.
Even if the user has to wait for the data to be downloaded before he can do anything meaningful, he still won't expect the UI to be blocked.
Let's assume you have a navigation controller, and after the user taps some button you push a new view onto it and start downloading something. If the user suddenly decides he doesn't want to wait any more, he taps the "back" button. If your downloading operation blocks the UI, the user will have to wait for it to end, and that's really bad.
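For reference, a minimal GCD sketch of moving the download off the main thread (the endpoint URL and the updateUIWithData: method are illustrative, not from your app):

// Kick off the download on a background queue so the main thread keeps
// rendering the UI (spinner, placeholder content, the back button, etc.).
NSURL *url = [NSURL URLWithString:@"https://example.com/api/user"];   // example endpoint
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSData *data = [NSData dataWithContentsOfURL:url];

    dispatch_async(dispatch_get_main_queue(), ^{
        if (data) {
            [self updateUIWithData:data];   // hypothetical UI-update method
        } else {
            // Handle the failure (e.g. show an alert) without ever having
            // blocked the main thread while waiting for the server.
        }
    });
});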
A more appropriate question would perhaps be whether you should download asynchronously or on the main thread in your app, since there are several different ways to download asynchronously on iOS (e.g. using NSThread, NSOperation, or indeed GCD). An easy approach to achieve your goals could be to use the AFNetworking library. It makes multithreaded networking / internet code very easy to implement and understand.
Personally I'm very fond of GCD and recommend you learn it someday soon, though on its own it is not as convenient for asynchronous downloading as a library like AFNetworking (which uses GCD under the hood, I believe).
Here is a good read on using NSOperationQueue (which uses GCD behind the scenes) to download images. There is also some GitHub code you can check out. They have an elegant solution for pausing downloads and enqueueing new downloads when the user moves to different parts of your app.
http://eng.alphonsolabs.com/concurrent-downloads-using-nsoperationqueues/?utm_medium=referral&utm_source=pulsenews
Use GCD / NSOperationQueue rather than NSThread. You will get a good grounding in the core fundamentals and at the same time create a well-architected app. :)