To get around an iOS limitation with static libraries I'm embedding some binary resources (a few small images, primarily) in a static library as byte arrays.
Functionally, this works well.
My question is - what are the drawbacks of such an approach?
Specifically, if someone were to go overboard with this and embed tons of large resources in the binary - would this cause any problems?
Because I'm not 100% sure how iOS loads binaries, I don't know whether this data is all loaded into memory when the app launches, or whether it stays in the DATA segment and is paged in from disk on demand.
It depends on how you're generating the byte array. Is it PNG/JPEG data or raw pixels? Raw pixels will be much larger, both on disk and in memory, than encoded PNG/JPEG data.
And the data in the byte array is always going to be in memory; it will probably need to be copied again to actually load the image, so you're using twice the memory you would if you'd loaded it from a file.
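To make that concrete, here is a minimal Swift sketch of the byte-array approach (the array contents are truncated placeholders, so this particular data won't actually decode): the static array stays resident in the binary, and decoding it produces a second copy of the image in memory.

```swift
import UIKit

// Hypothetical byte array generated from a PNG at build time
// (truncated to the PNG magic bytes here for illustration).
let embeddedPNGBytes: [UInt8] = [0x89, 0x50, 0x4E, 0x47]

func embeddedImage() -> UIImage? {
    // The array lives in the binary's data segment for the app's lifetime;
    // wrapping it in Data and decoding it adds a second, decoded copy.
    let data = Data(embeddedPNGBytes)
    return UIImage(data: data)
}
```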
Also, you are missing out on some of the built-in behaviours that iOS has for managing images. For example, if you load an image using [UIImage imageNamed:@"foo.png"], the image is cached so it's quicker to load next time, loading multiple copies doesn't result in duplicate memory usage, and the cache is automatically cleared if memory runs low. If you load the image from data, you miss out on all of those features.
The conventional approach is to supply a resource bundle along with your library and then load assets from it using the NSBundle methods (you can load additional bundles and then use the pathForResource:ofType: methods just as you do with the mainBundle).
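A minimal sketch of the bundle approach, assuming the library ships a resource bundle named MyLibResources.bundle (the name is hypothetical):

```swift
import UIKit

func libraryImage(named name: String) -> UIImage? {
    // Locate the resource bundle shipped alongside the static library.
    guard let bundlePath = Bundle.main.path(forResource: "MyLibResources", ofType: "bundle"),
          let resourceBundle = Bundle(path: bundlePath),
          let imagePath = resourceBundle.path(forResource: name, ofType: "png") else {
        return nil
    }
    // The image is read from disk on demand rather than living in the binary.
    return UIImage(contentsOfFile: imagePath)
}
```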
I'm struggling with memory management in iOS while downloading relatively large files from the web (such as 350 MB videos).
The goal is to download these kinds of files and store them in Core Data in a Binary Data field.
At the moment I'm using the NSURLSession.dataTaskWithURL and NSURLSession.dataTaskWithRequest methods to retrieve these files, but these methods don't seem to manage memory usage at all: they just keep filling memory until I hit a memory warning at around 380 MB.
(Screenshots: initial memory usage, and the memory warning at ~380 MB.)
What's the best strategy to perform this kind of large download from the web without hitting a memory warning? Can Alamofire or other libraries deal with this problem?
It is better to use a download task: save the video as a file to the Documents or Library directory, then store the relative path in Core Data. A download task has two advantages: you can resume if the last download fails, and it needs far less memory, because the data is streamed to a temporary file on disk rather than accumulated in RAM.
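A minimal Swift sketch of that flow (the file-naming convention is an assumption, and error handling is trimmed):

```swift
import Foundation

// Download straight to disk; only small buffers pass through memory.
func downloadVideo(from url: URL, completion: @escaping (String?) -> Void) {
    let task = URLSession.shared.downloadTask(with: url) { tempURL, _, error in
        guard let tempURL = tempURL, error == nil else {
            completion(nil)
            return
        }
        let documents = FileManager.default.urls(for: .documentDirectory,
                                                 in: .userDomainMask)[0]
        let fileName = url.lastPathComponent   // assumption: name the file after the URL
        let destination = documents.appendingPathComponent(fileName)
        try? FileManager.default.removeItem(at: destination)
        do {
            try FileManager.default.moveItem(at: tempURL, to: destination)
            completion(fileName)   // store this relative path in Core Data, not the bytes
        } catch {
            completion(nil)
        }
    }
    task.resume()
}
```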
You can try AFNetworking to download large files.
After reading the question on how to build a complex web_ui application, I'm wondering if it will be performant enough.
I'm building a fairly large web application and am wondering if I should split up the main parts, or if I can safely serve everything on one page (assuming I don't mind that the user has to download a pretty big chunk of .dart code).
It is usually considered best practice to split up your code once it reaches a certain size (that size depends on your target audience, your servers etc.).
Dart2js supports lazy loading of libraries so that you load an initial app chunk on page load and then load separate chunks through AJAX requests as they are needed. This will save you bandwidth and speed up page load time.
You can start with serving a single file, but if you expect that will not be performant enough, I would build lazy loading into the app from the start.
Note: at this time there are limitations on how many files can be lazy loaded: https://code.google.com/p/dart/issues/detail?id=3940
I have a method in my app that builds UIImages with specific colors. Since most likely the same colored image will be created multiple times, I would like to cache that UIImage, then use the cached version rather than building a new one if that specific color is needed.
This is NOT caching of remote images from the web, these are locally created images.
What is the best method to do this? From disk or just save the UIImage objects into an NSDictionary? What about NSCache?
I would prefer not to use a library for this; I'm looking for a simple solution.
It depends how many images you have and how frequently and concurrently each is used.
If you have a set of images which are all used frequently then NSDictionary is a good choice as it will keep all the images in memory. If you do get a memory warning you can always remove all of the images and then regenerate them when required.
As you're generating the images in code, it seems like caching to disk won't be so useful, but that depends on how complex the images are. Again, an NSDictionary can be used as the in-memory cache; fall back to disk if nothing is in the dictionary, then recreate the image if all else fails.
The NSCache route offers you some multi-threading benefits (if you'd use them) but is generally similar to the NSDictionary route. You have a little less control as the memory management is handled for you so it's possible that the cache could decide to destroy some of your images more frequently than you might if you manage it explicitly.
In any case you only need a handful of lines on top of your current generation code.
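For example, a minimal NSCache-based sketch, where the solid-fill drawing is just a stand-in for your own generation code:

```swift
import UIKit

final class ColorImageCache {
    private let cache = NSCache<UIColor, UIImage>()

    func image(for color: UIColor,
               size: CGSize = CGSize(width: 32, height: 32)) -> UIImage {
        // Return the cached image if one was already generated for this color.
        if let cached = cache.object(forKey: color) {
            return cached
        }
        // Otherwise generate it (stand-in for the real drawing code) and cache it.
        let renderer = UIGraphicsImageRenderer(size: size)
        let image = renderer.image { context in
            color.setFill()
            context.fill(CGRect(origin: .zero, size: size))
        }
        cache.setObject(image, forKey: color)
        return image
    }
}
```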
Is there a way to load (large) PDF files only partially? So, let's say: don't load the complete PDF file, but only the first 5 pages.
Because I'm actually handling large PDF files (30-50 MB), when I call CGPDFRetain on the whole document, the complete 30-50 MB is retained in memory.
Can somebody help me with this? Is it possible to fetch single pages out of a PDF without first loading the complete PDF into memory?
Update: Because my app needs to support offline access, the PDFs have to be loaded from local storage.
Update 2: I have tried different strategies by now, but the app is still at the edge of its memory limit, because I'm loading the PDF completely into memory in one single step. Somehow it should be possible to support big PDF files, shouldn't it?
I don't know what CGPDFRetain is, so I might be totally off. PDF is designed in such a way that you only need parts of it to render it correctly. There is something called a "web optimized" (linearized) PDF, which has its objects arranged in a special way. Every web server is able to send a byte range of a document, and these two mechanisms together allow partial loading of a PDF.
You should elaborate on where you load the PDF from.
It's not like this: CGPDFDocument points to a file on disk and keeps parts cached in memory, but never the whole document.
There are some problems with CGPDFDocument getting too greedy with memory, but in that case just destroy and re-create the CGPDFDocument and you're fine. Otherwise, your app might just crash after CGPDFDocument has allocated too much memory.
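A minimal sketch of pulling a single page out of a local PDF with CGPDFDocument (scaling to the page's media box is omitted for brevity); the document is opened from disk and pages are parsed on demand:

```swift
import UIKit

// Render one page of a local PDF without reading the whole file into memory.
func renderPage(number pageNumber: Int, ofPDFAt url: URL, size: CGSize) -> UIImage? {
    guard let document = CGPDFDocument(url as CFURL),
          let page = document.page(at: pageNumber) else {   // pages are 1-indexed
        return nil
    }
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { context in
        // Flip the coordinate system: PDF's origin is bottom-left, UIKit's is top-left.
        context.cgContext.translateBy(x: 0, y: size.height)
        context.cgContext.scaleBy(x: 1, y: -1)
        context.cgContext.drawPDFPage(page)
    }
}
```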
I was wondering which way of managing thumbnail images has less impact on web server performance.
This is the scenario:
1) Each order can have a maximum of 10 images.
2) Images do not need to be stored after the order has completed (the maximum period is 2 weeks).
3) Potentially, there may be a few thousand active orders at any time.
4) Orders with images will be visited frequently by customers.
IMO, pre-generating thumbnails on the hard disk is the better solution, as disk space is cheap, even with RAID.
But what about disk I/O speed and the resources needed to load the images? Will that take more resources than generating thumbnails in real time?
I would most appreciate it if you could share your opinion.
I suggest a combination of both - dynamic generation with disk caching. This prevents wasted space from unused images, yet adds absolutely no overhead for repeatedly requested images. SQL and mem caching are not good choices, both require too much RAM. IIS can serve large images from disk while only using 100k of RAM.
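The generate-once, serve-from-disk flow is simple enough to sketch. Here it is in Swift to match the rest of this page (the helper name and thumbnail size are hypothetical; in an ASP.NET stack the same logic would live in a handler or HttpModule):

```swift
import UIKit

// Dynamic generation with a disk cache: resize on the first request,
// then serve the cached file for every request after that.
func thumbnailData(forImageAt sourceURL: URL, cacheDirectory: URL) throws -> Data {
    let cachedURL = cacheDirectory.appendingPathComponent(sourceURL.lastPathComponent)

    // Repeat request: serve straight from disk, no resizing cost.
    if let cached = try? Data(contentsOf: cachedURL) {
        return cached
    }

    // First request: generate the thumbnail, write it to disk, then serve it.
    guard let original = UIImage(contentsOfFile: sourceURL.path) else {
        throw CocoaError(.fileReadCorruptFile)
    }
    let size = CGSize(width: 160, height: 160)   // hypothetical thumbnail size
    let renderer = UIGraphicsImageRenderer(size: size)
    let thumbnail = renderer.image { _ in
        original.draw(in: CGRect(origin: .zero, size: size))
    }
    guard let data = thumbnail.jpegData(compressionQuality: 0.8) else {
        throw CocoaError(.fileWriteUnknown)
    }
    try data.write(to: cachedURL)
    return data
}
```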
While creating http://imageresizing.net, I discovered 29 image resizing pitfalls, and few of them are obvious. I strongly suggest reading the list, even if it's a bit boring. You'll need an HttpModule to be able to pass cached requests off to IIS.
Although, why reinvent the wheel? The ImageResizer library is widely used and well tested.
If the orders are visited frequently by customers, it is better to create the thumbnails once and store them on disk. That way the web server doesn't spend as long processing each page, which will speed up the loading time of your web pages.
It depends on your load. If the resource is being requested multiple times then it makes sense to cache it.
Will there always have to be an image? If not, you can create it on the first request and then cache it, either in memory or (more likely) in a database, for subsequent requests.
However, if you always need the n images to exist per order, and/or you have multiple orders being created regularly, you will be better off passing the thumbnail creation off to a worker thread or some kind of asynchronous page. That way, multiple requests can be stacked up, reducing load on the server.