Efficient way to resize photos in iOS

What is the most efficient way to iterate over the entire camera roll, open every single photo and resize it?
My naive attempt, iterating over the asset library and getting each asset's defaultRepresentation, took about 1 second per 4 images (iPhone 5). Is there a way to do better?
I need the resized images to do some kind of processing.

Resizing full-resolution photos is a rather expensive operation, but you can use the versions already resized to screen resolution:
ALAsset *result = // .. do not forget to initialize it
ALAssetRepresentation *rawImage = [result defaultRepresentation];
UIImage *image = [UIImage imageWithCGImage:rawImage.fullScreenImage];
If you need a different resolution, you can still start from fullScreenImage, since it is much smaller than the original photo.
(CGImageRef)fullScreenImage
Returns a CGImage of the representation that is appropriate for displaying full screen, or NULL if a CGImage representation could not be generated. The dimensions of the image are dependent on the device your application is running on; the dimensions may not, however, exactly match the dimensions of the screen.
In iOS 5 and later, this method returns a fully cropped, rotated, and adjusted image, exactly as a user would see in Photos or in the image picker.
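If you need a size other than the screen size, one option is to draw the fullScreenImage into a smaller context. A minimal sketch (the helper name and use of ARC are my assumptions, not part of the answer above):
// Hypothetical helper: scale an asset's screen-sized image down further.
- (UIImage *)imageForAsset:(ALAsset *)asset scaledToSize:(CGSize)targetSize
{
    ALAssetRepresentation *rep = [asset defaultRepresentation];
    // Start from the screen-resolution image; decoding and drawing it is
    // much cheaper than working with the full-resolution original.
    UIImage *fullScreen = [UIImage imageWithCGImage:rep.fullScreenImage];
    UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);
    [fullScreen drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
    UIImage *resized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resized;
}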

Related

Does ALAsset only provide a thumbnail of size 150x90? Is there any way of getting a larger thumbnail, say 320x320 or 640x640?

I tried getting a thumbnail CGImage from the ALAsset class, but it only returns 120x90 pixel images on an iPhone 4, which look very bad. Is there a way of getting larger thumbnails (say 320x320 or 640x640)?
I want to load thumbnails quickly, because full-size images take anywhere between 0.5 and 0.9 seconds to load on my iPhone 4.
Is there a solution and sample code?
Use the fullScreenImage method of ALAssetRepresentation.
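In code, that looks much like the first answer above (a sketch, assuming asset is a valid ALAsset):
ALAssetRepresentation *rep = [asset defaultRepresentation];
// Screen-sized, cropped, rotated, and adjusted; much larger than the
// 120x90 thumbnail, but far cheaper to load than the full-size image.
UIImage *largeThumbnail = [UIImage imageWithCGImage:rep.fullScreenImage];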

Poor memory management performance for images on ios devices

I have the following issue:
I have a primary view object (that inherits from UIView) that displays a grid of 16 squares (each is a class I created that inherits from UIImageView), in a 4x4 layout.
Each of these 16 squares is 160x160 and contains an image (a different image for each square) that is no bigger than 30 KB. The image, however, is 500x500 (because it is used elsewhere in the program at its full size), so it gets resized to 160x160 in the "square" class by the setFrame method.
By looking at the memory management feature of Xcode when the app is running, I've noticed a few things:
- Each of these squares, when added to the primary view object, increases the memory usage of the app by 1 MB. This doesn't happen at instantiation, but only when they are added by [self addSubview:square] in the primary view object.
- If I use the same image for all the squares, the memory increase is minimal. If I initialize the square objects without any images, the increase is basically zero.
- The same app, when running in the simulator, uses 1/6 of the memory it does on an actual device.
The whole point here is: why is each of the squares using up 1 MB of memory when loading a 30 KB image? Is there a way to reduce this? I've tried creating the images in a number of different ways: [UIImage imageNamed:img], [UIImage imageWithContentsOfFile:path], [UIImage imageWithData:imgData scale:scale], as well as not resizing the frame.
When you use a 500x500 image in a smaller UIImageView, it's still loading the full-size image into memory. You can solve this by resizing the UIImage itself (not just adjusting the frame of the UIImageView), making a 160x160 image, and using that image in your view. See this answer for some code to resize the image, which can then be invoked as follows:
UIImage *smallImage = [image scaleImageToSizeAspectFill:CGSizeMake(160, 160)];
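The linked answer's code isn't reproduced here; a minimal sketch of what such a category method might look like (an aspect-fill scale and crop of my own, not necessarily the linked implementation):
@implementation UIImage (Scale)
// Scale so the image fills targetSize, cropping any overflow (aspect fill).
- (UIImage *)scaleImageToSizeAspectFill:(CGSize)targetSize
{
    CGFloat scale = MAX(targetSize.width / self.size.width,
                        targetSize.height / self.size.height);
    CGSize scaledSize = CGSizeMake(self.size.width * scale,
                                   self.size.height * scale);
    // Center the scaled image so any overflow is cropped evenly.
    CGRect drawRect = CGRectMake((targetSize.width - scaledSize.width) / 2.0,
                                 (targetSize.height - scaledSize.height) / 2.0,
                                 scaledSize.width, scaledSize.height);
    UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);
    [self drawInRect:drawRect];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
@end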
You might even want to save the resized image, so you don't incur the computational overhead of creating the smaller image every time, e.g.:
NSData *data = UIImagePNGRepresentation(smallImage);
[data writeToFile:path atomically:YES];
You can then load that PNG file corresponding to your small image in future invocations of the view.
In answer to your question of why it takes up so much memory: while the image is probably stored as a compressed JPEG or PNG in persistent storage, in memory it's held as an uncompressed bitmap. There are many internal formats, but a common one is a 32-bit format with 8 bits each for red, green, blue, and alpha. Regardless of the specifics, you can quickly see how a 500x500 pixel representation at 4 bytes per pixel translates to about 1 MB of memory, whereas a 160x160 image should be roughly one tenth the size.
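For concreteness, the arithmetic:
500 x 500 pixels x 4 bytes/pixel = 1,000,000 bytes, about 1 MB
160 x 160 pixels x 4 bytes/pixel = 102,400 bytes, about 100 KB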

Nimbus NIToolbarPhotoViewController Image Crisping Effect

I'm using Nimbus to display a photo album with scrubber and zoomable image view. I use network images, and display a thumbnail until the final image is loaded. NIPhotoAlbumScrollView provides the method didLoadPhoto:atIndex:photoSize: to accomplish exactly that.
From the source code comments, NIPhotoScrollView should support that "image crisping effect": showing the thumbnail and, when the full-size image is loaded, sharpening the image without losing the zoom state.
This feature seems broken though. When the thumbnail is loaded, it is displayed in its 1:1 pixel size, which is very small on screen. When the full-size image is loaded, it is also loaded in its 1:1 pixel size (if smaller than the available view size), which makes the image visually jump bigger.
Any idea on how to fix that issue?
Note that I tried full-size images with dimensions both bigger and smaller than the NIToolbarPhotoViewController on screen.
You may already be doing this, but one thing to make certain of: where you implement photoAlbumScrollView:photoAtIndex:photoSize:isLoading:originalPhotoDimensions: for the NIPhotoAlbumScrollViewDataSource protocol, you must do the following, as mentioned in these comments in the source:
* If you have a thumbnail in memory but not the full-size image yet, then you should return
* the thumbnail, set isLoading to YES, and set photoSize to NIPhotoScrollViewPhotoSizeThumbnail.
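A sketch of what that might look like (the cache properties and loader method are hypothetical placeholders for however you store and fetch your images):
- (UIImage *)photoAlbumScrollView:(NIPhotoAlbumScrollView *)photoAlbumScrollView
                     photoAtIndex:(NSInteger)photoIndex
                        photoSize:(NIPhotoScrollViewPhotoSize *)photoSize
                        isLoading:(BOOL *)isLoading
          originalPhotoDimensions:(CGSize *)originalPhotoDimensions
{
    UIImage *fullImage = [self.fullImageCache objectForKey:@(photoIndex)]; // hypothetical cache
    if (fullImage != nil) {
        *photoSize = NIPhotoScrollViewPhotoSizeOriginal;
        *isLoading = NO;
        return fullImage;
    }
    // The full-size image isn't in memory yet: kick off the network load,
    // then return the thumbnail flagged as loading so the crisping effect
    // can swap in the full image later without losing the zoom state.
    [self startLoadingPhotoAtIndex:photoIndex]; // hypothetical loader
    *photoSize = NIPhotoScrollViewPhotoSizeThumbnail;
    *isLoading = YES;
    return [self.thumbnailCache objectForKey:@(photoIndex)]; // hypothetical cache
}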

How can I load portrait-oriented images from the photo roll with GLKTextureLoader at the proper orientation?

I'm using GLKTextureLoader to load images from the photo roll, but any time an image that was shot in portrait aspect ratio is loaded, it ends up being treated as a landscape aspect image, rotated 90 degrees off.
I believe the UIImage from the photo roll will need to be rotated and redrawn using Quartz, but I don't really know where to begin. I thought I might be able to modify the popular UIImage+Resize.h library, but when I try to load a UIImage that has been simply resized with that library, the GLKTextureLoader fails, suggesting to me that some information it relies on is lost in the process.
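One common starting point (a sketch of the Quartz redraw idea, assuming image is the UIImage from the photo roll): drawInRect: applies the imageOrientation for you, so redrawing produces an image whose pixel data is upright.
// Redraw so the pixel data is stored upright; the result has
// UIImageOrientationUp, which GLKTextureLoader can consume without
// the 90-degree rotation. Note that image.size already accounts
// for the orientation.
UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
[image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
UIImage *upright = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();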

UIImage from UIImagePickerController orientation issue

I'm using UIImagePickerController to fetch images from the user's photo library and/or taken with the camera. Works great.
I'm noticing that fetched images are often (always?) coming back with their imageOrientation set to UIImageOrientationRight, even though the image was captured with the device in portrait orientation. Why is this? This is an iPhone 4S on iOS 6, using the rear camera, so the resolution is 8 MP.
In the simulator, grabbing photos from the photo library, images come back UIImageOrientationUp.
When I display the image in a UIImageView the orientation looks correct (portrait/up). But when I go to crop the image the coordinate system isn't what I would expect. 0,0 is in the upper-right of the image, which I guess makes sense when it reports UIImageOrientationRight.
I'm looking for an explanation of what's going on and the correct approach to dealing with the odd coordinate system.
EDIT: it sure appears to me that, on iPhone4S at least, the camera always takes UIImageOrientationRight/"landscape" images, and that UIImageView is respecting the imageOrientation on display. However, if I save the image using UIImagePNGRepresentation the orientation is not preserved (I think I read about this somewhere.)
It has to do with the orientation the phone was in when the image was taken. The phone doesn't rotate the image data from the camera sensor so that "up" in the image is actually up; instead, it sets the imageOrientation, and UIImage takes care of rendering things the right way.
When you try to crop, you typically drop down to the underlying CGImage, and that loses the orientation information, so suddenly you get the image in a strange orientation.
There are several UIImage categories available that will perform image cropping while taking imageOrientation into account. Have a look at link or link.
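If you'd rather do it by hand, a sketch of the idea (redraw first so the pixel coordinate system matches what's on screen, then crop; image and cropRect are assumed variables, with cropRect in points):
// 1. Redraw so the pixel data matches what UIImageView displays.
UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
[image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
UIImage *upright = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// 2. Crop in the expected coordinate system (0,0 at the top left),
//    converting the point-based crop rect to pixels.
CGRect pixelRect = CGRectMake(cropRect.origin.x * upright.scale,
                              cropRect.origin.y * upright.scale,
                              cropRect.size.width * upright.scale,
                              cropRect.size.height * upright.scale);
CGImageRef croppedRef = CGImageCreateWithImageInRect(upright.CGImage, pixelRect);
UIImage *cropped = [UIImage imageWithCGImage:croppedRef
                                       scale:upright.scale
                                 orientation:UIImageOrientationUp];
CGImageRelease(croppedRef);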
