I'm pulling in images from the net and want to manipulate them a bit, such as adding perspective with CATransform3D, and compositing a couple together. After I'm done, I would like to save the result in memory so it can be pulled up when needed (in a table view cell, for example). I managed to extract the images from the web and manipulate them by making a CALayer. After a bit of reading I'm a bit confused about how to do this properly, since these images aren't displayed until needed, and I obviously would like to do my work on a worker thread so the system won't lag. What would the best procedure be?
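Roughly, this is the kind of thing I have in mind (a sketch with illustrative names; the layer tree is built entirely in code and never attached to an on-screen view):

    // (QuartzCore framework needed for CALayer / CATransform3D)
    - (void)buildCompositeImageWithCompletion:(void (^)(UIImage *image))completion
    {
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            // Compose the layers entirely in code; nothing here touches the screen.
            CALayer *composite = [CALayer layer];
            composite.frame = CGRectMake(0, 0, 320, 240);
            // ... add sublayers, set a CATransform3D for perspective, etc. ...

            // Render the standalone layer tree into a bitmap context.
            UIGraphicsBeginImageContextWithOptions(composite.bounds.size, NO, 0.0);
            [composite renderInContext:UIGraphicsGetCurrentContext()];
            UIImage *rendered = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();

            dispatch_async(dispatch_get_main_queue(), ^{
                if (completion) completion(rendered); // display or cache on the main thread
            });
        });
    }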
Apple recommends that you almost never try to cache images yourself, since the system caches them internally and you can be reasonably sure that its cache will behave properly even under high memory pressure.
You can cache an image in Apple's internal cache via the setName: and imageNamed: methods. In addition, you should save a local copy of the image to disk in the Caches directory so you don't need to download it again if the cache gets cleared.
So, in summary: use imageNamed:; if that returns nil, check the disk cache directory; if that is also nil, download the image. Caching a CALayer would create a great deal of dirty memory.
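A rough sketch of that lookup order (the key naming and the synchronous download are only for illustration; do the download off the main thread in real code):

    - (UIImage *)imageForKey:(NSString *)key remoteURL:(NSURL *)url
    {
        // 1. Ask the system cache first (this only hits for bundled resources,
        //    or whatever the cache already knows about under this name).
        UIImage *image = [UIImage imageNamed:key];
        if (image) return image;

        // 2. Fall back to a copy saved earlier in the Caches directory.
        NSString *cachesDir = NSSearchPathForDirectoriesInDomains(NSCachesDirectory,
                                                                  NSUserDomainMask, YES)[0];
        NSString *path = [cachesDir stringByAppendingPathComponent:key];
        image = [UIImage imageWithContentsOfFile:path];
        if (image) return image;

        // 3. Last resort: download it again and keep a copy on disk.
        NSData *data = [NSData dataWithContentsOfURL:url];
        if (!data) return nil;
        [data writeToFile:path atomically:YES];
        return [UIImage imageWithData:data];
    }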
This is a rather simple question, but I haven't been able to pinpoint a clear answer in my searching.
If I have an NSArray, and add fifty 1MB UIImages to it, where does that 50MB get deducted from? Will the app be using 50MB more memory? Will it simply store it on the disk?
The same goes for Core Data where instead of using a persistent store I store it in memory. Would the size of the Core Data store take up exactly that much memory/RAM or would it live on the disk and be wiped when the app finishes executing?
I'm concerned whether or not I should be storing several dozen megabytes in UIImages in an NSArray, or if I should be using NSCache (I'd rather not as I'd prefer to never lose any of the images).
If I have an NSArray, and add fifty 1MB UIImages to it, where does that 50MB get deducted from? Will the app be using 50MB more memory?
Yes.
Will it simply store it on the disk?
No. Arrays are stored in memory.
The same goes for Core Data where instead of using a persistent store I store it in memory. Would the size of the Core Data store take up exactly that much memory/RAM or would it live on the disk and be wiped when the app finishes executing?
Yes. If you tell Core Data to store everything in memory, that's exactly what it will do.
The line between "memory" and "disk" can get a little fuzzy if you consider that virtual memory systems can swap pages of real memory out to disk and read them back when they're needed. That's not an issue for iOS, however, as iOS doesn't provide a VM backing store and writeable memory is never swapped out.
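For completeness, a minimal sketch of how an in-memory Core Data store is configured (the managed object model, model, is assumed to already exist):

    // An in-memory store keeps the whole store in RAM and is discarded on exit.
    NSPersistentStoreCoordinator *coordinator =
        [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:model];

    NSError *error = nil;
    [coordinator addPersistentStoreWithType:NSInMemoryStoreType
                              configuration:nil
                                        URL:nil   // no backing file for an in-memory store
                                    options:nil
                                      error:&error];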
I'm concerned whether or not I should be storing several dozen megabytes in UIImages in an NSArray, or if I should be using NSCache
Those aren't your only options, of course. You could store the images in files and read them in as needed. You should be thoughtful about the way your app uses both memory and disk space, but you also need to consider network use, battery use, and performance. Storing data on disk is often preferable to downloading it again, because downloading takes time, may impact the user's data plan, and uses a lot more energy than reading data from secondary storage.
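For instance, here's a small sketch of the files-on-disk option (the path and file name are illustrative, and image stands for a UIImage you already have):

    NSString *cachesDir = NSSearchPathForDirectoriesInDomains(NSCachesDirectory,
                                                              NSUserDomainMask, YES)[0];
    NSString *path = [cachesDir stringByAppendingPathComponent:@"photo-42.png"];

    // Write the image out once...
    [UIImagePNGRepresentation(image) writeToFile:path atomically:YES];

    // ...and read it back lazily, only when something actually needs to display it.
    UIImage *loaded = [UIImage imageWithContentsOfFile:path];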
I have a method in my app that builds UIImages in specific colors. Since the same colored image will most likely be created multiple times, I would like to cache that UIImage and use the cached version rather than building a new one whenever that specific color is needed.
This is NOT caching of remote images from the web; these are locally created images.
What is the best method to do this? From disk or just save the UIImage objects into an NSDictionary? What about NSCache?
I would prefer not to have to use a library for this; I'm looking for a simple solution.
It depends how many images you have and how frequently and concurrently each is used.
If you have a set of images which are all used frequently then NSDictionary is a good choice as it will keep all the images in memory. If you do get a memory warning you can always remove all of the images and then regenerate them when required.
Since you're generating the images in code, caching to disk may not be that useful, though it depends on how complex the images are. Again, an NSDictionary can be used as the in-memory cache, falling back to disk if there's nothing in the dictionary, and recreating the image if all else fails.
The NSCache route offers you some multi-threading benefits (if you'd use them) but is generally similar to the NSDictionary route. You have a little less control as the memory management is handled for you so it's possible that the cache could decide to destroy some of your images more frequently than you might if you manage it explicitly.
In any case you only need a handful of lines on top of your current generation code.
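As a sketch of what that might look like (names are illustrative; imageWithColor:size: stands in for whatever generation method you already have, and the NSCache could just as well be an NSMutableDictionary that you empty on memory warnings):

    @interface ColorImageCache : NSObject
    - (UIImage *)imageForColor:(UIColor *)color size:(CGSize)size;
    @end

    @implementation ColorImageCache {
        NSCache *_cache;
    }

    - (instancetype)init
    {
        if ((self = [super init])) {
            _cache = [[NSCache alloc] init];
        }
        return self;
    }

    // Returns a cached image for the color/size pair, generating it only on a miss.
    - (UIImage *)imageForColor:(UIColor *)color size:(CGSize)size
    {
        NSString *key = [NSString stringWithFormat:@"%@-%@",
                         color.description, NSStringFromCGSize(size)];
        UIImage *image = [_cache objectForKey:key];
        if (!image) {
            image = [self imageWithColor:color size:size]; // your existing generator
            if (image) [_cache setObject:image forKey:key];
        }
        return image;
    }

    // Stand-in for the existing generation code: a flat-colored image.
    - (UIImage *)imageWithColor:(UIColor *)color size:(CGSize)size
    {
        UIGraphicsBeginImageContextWithOptions(size, YES, 0.0);
        [color setFill];
        UIRectFill((CGRect){CGPointZero, size});
        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return image;
    }
    @end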
I am developing a news reader app similar to the BBC News iOS app.
In my app I have to download images from the server and show them in a view, so that users can more easily choose the news they want to read.
For better performance, I have to cache the images to avoid re-downloading them from the server.
I know there are two kinds of cache: an in-memory cache, which keeps images in memory (RAM), and a disk cache, which saves images on disk so they can be loaded when needed.
My question is:
What is the best mixed image-caching strategy for my app (using both an in-memory cache and a disk cache)?
My solution is:
download an image --> save it in the disk cache + save it in the in-memory cache --> load images from the in-memory cache on demand and show them in the view --> when the in-memory cache exceeds its MAX_SIZE, free the in-memory cache --> load images from the disk cache on demand and put them back into the in-memory cache --> repeat...
Is my solution the right approach?
Another question: when the in-memory cache exceeds its MAX_SIZE and we free it, all the images in the cache are lost, so the images currently shown in our view will disappear.
How can this problem be solved?
Sorry for my poor English.
Thanks in advance.
In one of my projects I implemented pretty much the same caching methods (Disk Cache and Memory Cache).
Maximum cache size
Each cache system had its own max size limit. The "size" of each image was computed differently in the cache systems.
For the memory cache, each image would have a size computed as
image size = image width * image height (in pixels)
So the maximum size for the memory cache represents a maximum total pixel area.
For the disk cache, I used the actual file size for each file.
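For illustration, the pixel-area idea for the memory cache could be mapped onto NSCache's cost accounting like this (my own implementation kept its own counters; the limit value below is made up):

    static NSCache *PixelAreaCache(void)
    {
        static NSCache *cache;
        static dispatch_once_t once;
        dispatch_once(&once, ^{
            cache = [[NSCache alloc] init];
            cache.totalCostLimit = 20 * 1000 * 1000; // max total pixel area kept in memory
        });
        return cache;
    }

    static void CacheImage(UIImage *image, NSString *key)
    {
        // Cost = width * height in pixels, matching the formula above.
        NSUInteger cost = (NSUInteger)(image.size.width * image.scale
                                       * image.size.height * image.scale);
        [PixelAreaCache() setObject:image forKey:key cost:cost];
    }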
Making room
When using the cache systems, you might get to a situation where one of the caches is full and you want to insert a new item in it - you have to remove some items to make room.
What I did was assign a timestamp to each entry in the cache. Every time I accessed an item, I updated its timestamp. When you want to make room, you just start removing items from the oldest to the newest based on the last-access timestamp.
This is a simple algorithm for freeing up space and in some cases might actually behave poorly. It is up to you to experiment and see if you need something more advanced than this.
For example, you could improve this method by adding a priority value for each item and keep old items in the cache if their priority is high.
Again, it depends on your app's needs.
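A rough sketch of the timestamp-based approach, using a dictionary of mutable entries that each record their last access date (the structure and the maxCount limit are illustrative; a real cache would also track sizes and thread safety):

    // Bump an entry's timestamp whenever it is read from or written to the cache.
    - (void)touchEntryForKey:(NSString *)key inEntries:(NSMutableDictionary *)entries
    {
        NSMutableDictionary *entry = entries[key];
        entry[@"lastAccess"] = [NSDate date];
    }

    // Evict the least recently used entries until the cache is back under its limit.
    - (void)makeRoomInEntries:(NSMutableDictionary *)entries maxCount:(NSUInteger)maxCount
    {
        while (entries.count > maxCount) {
            NSString *oldestKey = nil;
            NSDate *oldestDate = nil;
            for (NSString *key in entries) {
                NSDate *accessed = entries[key][@"lastAccess"];
                if (!oldestDate || [accessed compare:oldestDate] == NSOrderedAscending) {
                    oldestDate = accessed;
                    oldestKey = key;
                }
            }
            [entries removeObjectForKey:oldestKey];
        }
    }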
Expiration
For the disk cache, I would definitely add an expiration date for each entry. If the memory cache is destroyed when the user completely terminates the app, images in the disk cache might be stuck in there forever.
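For example, a simple sweep that deletes disk-cache files older than a cutoff date (the seven-day window and the "images" subdirectory name are just illustrative):

    - (void)purgeExpiredDiskCacheEntries
    {
        NSString *cacheDir = [NSSearchPathForDirectoriesInDomains(NSCachesDirectory,
                              NSUserDomainMask, YES)[0] stringByAppendingPathComponent:@"images"];
        NSFileManager *fm = [NSFileManager defaultManager];
        NSDate *cutoff = [NSDate dateWithTimeIntervalSinceNow:-7 * 24 * 60 * 60];

        for (NSString *name in [fm contentsOfDirectoryAtPath:cacheDir error:NULL]) {
            NSString *path = [cacheDir stringByAppendingPathComponent:name];
            NSDate *modified = [fm attributesOfItemAtPath:path error:NULL][NSFileModificationDate];
            if (modified && [modified compare:cutoff] == NSOrderedAscending) {
                [fm removeItemAtPath:path error:NULL]; // older than the cutoff: treat as expired
            }
        }
    }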
Encapsulation
Another aspect I would consider is making the caching system as transparent as possible to the programmer. If you want to enable or disable one of the caches, it is best if most of the calling code can stay the same.
In my app, I built a central content delivery system and I would always request images from the internet through this object. The caching system would then check the local caches (memory / disk) and either return me the image immediately or make a request to download it.
Either way, I, as the "user" of the caching system, did not care what was happening behind the curtains. All I knew was that I made a request for an image at a URL and I got it (faster or slower, depending on whether the image was cached).
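Sketched in code, such a facade might look something like this (the memory-cache and disk-cache helpers are hypothetical stand-ins for whatever your two caches expose):

    // Callers just ask for an image; the lookups happen behind the curtain.
    - (void)imageForURL:(NSURL *)url completion:(void (^)(UIImage *image))completion
    {
        UIImage *cached = [self imageFromMemoryCacheForURL:url]
                          ?: [self imageFromDiskCacheForURL:url];
        if (cached) {
            completion(cached);
            return;
        }

        [[[NSURLSession sharedSession] dataTaskWithURL:url
                completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
            UIImage *image = data ? [UIImage imageWithData:data] : nil;
            if (image) [self storeImage:image forURL:url]; // populate both caches
            dispatch_async(dispatch_get_main_queue(), ^{
                completion(image);
            });
        }] resume];
    }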
I have a question about using the AssetLibrary with iOS. Is it possible to store a pointer to an image in your app rather than the actual image? Let's say I want to create a playlist, but I don't want to store the actual image.
The reason I am asking is that when I use the image picker, I can save images to the device's Documents directory, but once I get to 25 or so, the device (an iPad 1) starts to slow down. I scale down the images if they are very large, and I have run through the Leaks instrument many times; there are no leaks. I am just at a loss as to where to turn next, so I wanted to investigate alternatives, since I don't see anywhere I can free up memory.
That's where I am now. I'm curious whether the AssetLibrary might be an option, since I wouldn't be storing physical images. I know it has some disadvantages (it requires the user's location permission and can be a bit slow when looping through images).
Any thoughts on this would be greatly appreciated.
Storing 25 images to the documents directory shouldn't slow down the device, unless you're trying to load all 25 extremely large images into memory at the same time.
You can't permanently store a pointer to the assets library asset, but you can store the URL you retrieve from the ALAssetRepresentation's url property and then use ALAssetsLibrary's assetForURL:resultBlock:failureBlock: to get back the corresponding ALAsset later. Do note that it is possible for the user to delete the asset from outside your program, even when your app is in the background, so if you are hanging on to an ALAsset you must listen for ALAssetsLibraryChangedNotification to know when to reload the assets.
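A short sketch of that round trip (savedURLString is whatever you persisted earlier from the representation's url property, and assetsLibrary is a hypothetical property that keeps the ALAssetsLibrary alive for as long as you use its assets):

    - (void)loadThumbnailForSavedURLString:(NSString *)savedURLString
    {
        // savedURLString was originally [[[asset defaultRepresentation] url] absoluteString]
        NSURL *assetURL = [NSURL URLWithString:savedURLString];

        [self.assetsLibrary assetForURL:assetURL resultBlock:^(ALAsset *asset) {
            if (!asset) {
                return; // the asset may have been deleted outside the app
            }
            UIImage *thumb = [UIImage imageWithCGImage:[asset thumbnail]];
            // hand `thumb` to the UI, cache it, etc.
        } failureBlock:^(NSError *error) {
            NSLog(@"Could not load asset: %@", error);
        }];
    }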
I need to display thumbnails of images in a given directory. I use TFileStream to read the image file before loading the image into an image component. The bitmap is then resized to the thumbnail size, and assigned to a TImage component on a TScrollBox.
It seems to work ok, but slows down quite a lot with larger images.
Is there a faster way of loading (image) files from disk and resizing them?
Thanks, Pieter
Not really. What you can do is resize them in a background thread and use a placeholder image until the resizing is done. I would then save these resized images to some sort of cache file for later processing (Windows does this and calls the cache Thumbs.db, stored in the current directory).
You have several options on the thread architecture itself. A single thread that does all images, or a thread pool where a thread only knows how to process a single image. The AsyncCalls library is even another way and can keep things fairly simple.
I'll complement skamradt's answer with an attempt to design this to be as fast as possible. For that you should:
optimize I/O
use multiple threads to make use of multiple CPU cores, and to keep even a single CPU core working while you read (or write) files
The use of multiple threads implies that using VCL classes for the resizing isn't going to work, as the VCL isn't thread-safe, and all hacks around that don't scale well. efg's Computer Lab has links for image processing code.
It's important to not cause several concurrent I/O operations when using multiple threads. If you choose to write the thumbnail images back to files, then once you have started reading a file you should read it completely, and once you have started writing a file you should also write it completely. Interleaving both operations will kill your I/O, because you potentially cause a lot of seeking operations of the hard disc head.
For best results the reading (and writing) of files should also not happen in the main (GUI) thread of your application. That would suggest the following design:
Have one thread read files into TGraphic objects, and put these into a thread-safe list.
Have a thread pool wait on the list of original-size images, and have each thread process one TGraphic object, resize it into another TGraphic object, and add this to another thread-safe list.
Notify the GUI thread for each thumbnail image added to the list, so it can be displayed.
If thumbnails are to be written to file, do this in the reading thread as well (see above for an explanation).
Edit:
On re-reading your question I notice that you maybe only need to resize one image, in which case a single background thread is of course enough. I'll leave my answer in place anyway, maybe it will be of use to someone else some time. It's what I learned from one of my latest projects, where the final program could have needed a little more speed but was only using about 75% of the quad core machine at peak times. Decoupling I/O from processing would have made the difference.
I often use TJPEGImage with Scale:=jsEighth (in Delphi 7). This is really fast because the JPEG de-compression can skip a lot of the data to fill a bitmap of only an eighth of width and height.
Another option is to use the shell's method to extract a thumbnail, which is pretty fast as well
I'm in the vision business, and I simply upload the images to the GPU using OpenGL (typically 20 frames of 2048x2000x8bpp per second), one bitmap per texture, and let the video card do the scaling (Win32, Mike Lischke's OpenGL headers).
Uploading such an image costs 5-10 ms depending on the exact video card (if it isn't integrated and is an NVIDIA 7300 series or newer; very recent integrated GPUs might also be workable). Scaling and displaying costs about 300 µs, which means customers can pan and zoom like crazy without taxing the app. I draw an overlay (which used to be a TMetafile but is now a custom format) on top of it.
My biggest picture is 4096x7000x8bpp, which shows and scales in under 30 ms (GF 8600).
A limitation of this technology is max texture size. It can be resolved by fragmenting the picture into multiple textures, but I haven't bothered yet because I deliver the systems with the software.
(Some typical maximum texture sizes:
nv6x00 series: 2k*2k, but uploading is just about break-even compared to GDI
nv7x00 series: 4k*4k; for me the baseline cards. GF7300s are around $20-40
nv8x00 series: 8k*8k)
Note that this might not be for everybody. But if you are in the lucky situation of being able to specify hardware limits, it might work. The main problem is laptops like ThinkPads, whose GPUs are older than the average laptop's, which in turn are often a generation behind desktops.
I chose OpenGL over DirectX because it is more static in time, and easier to find non-game related examples.
Take a look at the Graphics32 library: it's very good at drawing things and works great with bitmaps. It is thread-safe, comes with good examples, and is totally free.
Exploit Windows' ability to create thumbnails. Remember those hidden Thumbs.db files in folders that contain images?
I have implemented something like this feature but in VB. My software is able to build thumbnails of 100 files (mixed size) in around 10 seconds.
I am not able to convert it to Delphi though.