iOS 8 Load Images Fast From PHAsset

I have an app that lets people combine up to 4 pictures. However, when I let them choose from their photos (up to 4), it can be very slow even when I set the image quality to FastFormat: it takes about 4 seconds (roughly 1 second per photo). On the highest quality, 4 images take 6 seconds.
Can you suggest any way I can get the images out faster?
Here is the block where I process images.
func processImages()
{
    _selectediImages = Array()
    _cacheImageComplete = 0
    for asset in _selectedAssets
    {
        var options: PHImageRequestOptions = PHImageRequestOptions()
        options.synchronous = true
        options.deliveryMode = PHImageRequestOptionsDeliveryMode.FastFormat
        PHImageManager.defaultManager().requestImageForAsset(asset, targetSize: CGSizeMake(CGFloat(asset.pixelWidth), CGFloat(asset.pixelHeight)), contentMode: .AspectFit, options: options)
        {
            result, info in
            var minRatio: CGFloat = 1
            //Reduce file size so take 1/3 the screen w&h
            if(CGFloat(asset.pixelWidth) > UIScreen.mainScreen().bounds.width/2 || CGFloat(asset.pixelHeight) > UIScreen.mainScreen().bounds.height/2)
            {
                minRatio = min((UIScreen.mainScreen().bounds.width/2)/(CGFloat(asset.pixelWidth)), ((UIScreen.mainScreen().bounds.height/2)/CGFloat(asset.pixelHeight)))
            }
            var size: CGSize = CGSizeMake((CGFloat(asset.pixelWidth)*minRatio), (CGFloat(asset.pixelHeight)*minRatio))
            UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
            result.drawInRect(CGRectMake(0, 0, size.width, size.height))
            var final = UIGraphicsGetImageFromCurrentImageContext()
            var image = iImage(uiimage: final)
            self._selectediImages.append(image)
            self._cacheImageComplete!++
            println(self._cacheImageComplete)
            if(self._cacheImageComplete == self._selectionCount)
            {
                self._processingImages = false
                self.selectionCallback(self._selectediImages)
            }
        }
    }
}

Don't resize the images yourself — part of what PHImageManager is for is to do that for you. (It also caches the thumbnail images so that you can get them more quickly next time, and shares that cache across apps so that you don't end up with half a dozen apps creating half a dozen separate 500MB thumbnail caches of your whole library.)
func processImages() {
    _selectediImages = Array()
    _cacheImageComplete = 0
    for asset in _selectedAssets {
        let options = PHImageRequestOptions()
        options.deliveryMode = .FastFormat
        // request images no bigger than 1/3 the screen width
        let maxDimension = UIScreen.mainScreen().bounds.width / 3 * UIScreen.mainScreen().scale
        let size = CGSize(width: maxDimension, height: maxDimension)
        PHImageManager.defaultManager().requestImageForAsset(asset, targetSize: size, contentMode: .AspectFill, options: options)
        { result, info in
            // probably some of this code is unnecessary, too,
            // but I'm not sure what you're doing here so leaving it alone
            self._selectediImages.append(result)
            self._cacheImageComplete!++
            println(self._cacheImageComplete)
            if self._cacheImageComplete == self._selectionCount {
                self._processingImages = false
                self.selectionCallback(self._selectediImages)
            }
        }
    }
}
Notable changes:
Don't ask for images synchronously on the main thread. Just don't.
Pass a square maximum size to requestImageForAsset and use the AspectFill mode. This will get you an image that crops to fill that square no matter what its aspect ratio is.
You're asking for images by their pixel size here, and the screen size is in points. Multiply by the screen scale or your images will be pixelated. (Then again, you're asking for FastFormat, so you might get blurry images anyway.)

Why did you say synchronous? Obviously that's going to slow things way down. Moreover, requesting synchronously on the main thread is absolutely forbidden; read the docs and obey them. That is the primary issue here.
There are then many other considerations. Basically you're using this call all wrong. Once you've removed the synchronous option, do not process the image like that. Remember, this callback is going to be called many times as the image is provided in better and better versions; you must not do anything time-consuming here.
(Also, why are you resizing the image? If you wanted the image at a certain size, you should have asked for that size when you requested it. Let the image fetcher do the work for you.)
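To make that concrete, here is a minimal sketch (not from the original answers) using the same pre-Swift 3 Photos API as the code above: request the size you need, stay asynchronous, skip the degraded placeholder, and keep the handler cheap. The collageImages property is a hypothetical stand-in for wherever you store the results.
let options = PHImageRequestOptions()
options.deliveryMode = .Opportunistic   // degraded result first, better one later
options.networkAccessAllowed = true     // allow an iCloud fetch if needed

let scale = UIScreen.mainScreen().scale
let targetSize = CGSize(width: 200 * scale, height: 200 * scale) // pixels, not points

PHImageManager.defaultManager().requestImageForAsset(asset, targetSize: targetSize, contentMode: .AspectFill, options: options)
{
    result, info in
    // This handler can fire more than once; ignore the low-quality pass.
    let degraded = (info?[PHImageResultIsDegradedKey] as? Bool) ?? false
    if degraded { return }
    if let image = result {
        // Lightweight work only: the manager already delivered it at targetSize.
        self.collageImages.append(image)   // hypothetical storage property
    }
}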

Related

Release memory after UIGraphicsImageRenderer work

I have many images saved in an Assets.xcassets catalog. I'm resizing them to make them smaller, and later I don't need these images anymore. The images are never shown, just prepared, and the class that loaded them, resized them, and kept them is now successfully deinitialized.
But the memory used by UIGraphicsImageRenderer to resize the images is not released. Memory usage stays at the same level, even if I don't use the resized images at all, as in the example code below.
Something seems wrong. I resized the images to use less memory, but the opposite happens: the resized images use more memory, and it is not released after the owning class has been deinitialized.
How can I release the memory?
Apple's documentation says: "...An image renderer keeps a cache of Core Graphics contexts, so reusing the same renderer can be more efficient than creating new renderers." But I don't need that cache. How can I switch it off?
With 28 images it's not a big deal, but I have about 100-300 images that need to be resized, cropped, and processed in other ways with UIGraphicsImageRenderer, and by the end of the day that uses about 800-900 MB of memory, which is just cache from render jobs that are already done.
You can take the code below and try it.
class ExampleClass {
    func start() {
        Worker().doWork()
    }
}

class Worker {
    deinit {
        print("deinit \(Self.self)")
    }

    func doWork() {
        var urls: [String] = []
        _ = (1...28).map({ urls.append("pathToTheImages/\($0)") })
        // images with resolution 1024.0 x 1366.0 pixels
        for url in urls {
            let img = UIImage(named: url)! // Memory usage: 11.7 MB
            //let cropped = resizedImage(original: img, to: UIScreen.main.bounds.size)
            //With this line above - Memory usage: 17.5 MB even after this class has been deinited
        }
    }

    // from 2048 × 3285 pixels >>> to >>> 768.0 x 1024.0 pixels --- for iPad Pro (9.7-inch)
    func resizedImage(original: UIImage, to size: CGSize) -> UIImage {
        let result = autoreleasepool { () -> UIImage in
            let renderer = UIGraphicsImageRenderer(size: size)
            let result = renderer.image { (context) in
                original.draw(in: CGRect(origin: .zero, size: size))
            }
            return result
        }
        return UIImage(cgImage: result.cgImage!, scale: original.scale, orientation: original.imageOrientation)
    }
}
Asset catalogs are not intended for the use to which you are putting them. The purpose of an image in the asset catalog is to display it, directly. If you have a lot of images that you want to load and resize and save elsewhere without displaying, you need to keep them in your app bundle at the top level, so that you can call init(contentsOfFile:) which does not cache the image.
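As a rough sketch of the difference (photo1.jpg is a hypothetical file copied into the top level of the app bundle rather than into Assets.xcassets, and the asker's resizedImage(original:to:) helper from Worker above is reused):
import UIKit

// UIImage(named:) keeps decoded images in a system cache; UIImage(contentsOfFile:)
// just reads the file, so the memory can be reclaimed once the image goes away.
func prepareResizedImage() -> UIImage? {
    guard let path = Bundle.main.path(forResource: "photo1", ofType: "jpg") else { return nil }
    return autoreleasepool { () -> UIImage? in
        guard let original = UIImage(contentsOfFile: path) else { return nil }
        return Worker().resizedImage(original: original, to: UIScreen.main.bounds.size)
    }
}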

Compressing Large Assets From Dropbox

Currently I'm working on downloading all of the images in a user's selected folder. The process consists of:
Requesting all the thumbnails of the images
Requesting all the original images
Taking the original and creating a retina compressed version to display
The reason we need to keep the original is that it's the file we will be printing on anything from 8x10 picture frames to 40x40 canvas wraps, so having the original is important. The only part that's causing the crash is taking the original and creating the compressed version. I ended up using this:
autoreleasepool({
    self.compressed = self.saveImageWithReturn(image: self.original!.scaledImage(newSize: 2048), type: .Compressed)
})
scaling the image by calling:
func scaledImage(newSize newHeight: CGFloat) -> UIImage {
    let scale = newHeight / size.height
    let newWidth = size.width * scale
    UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight))
    drawInRect(CGRectMake(0, 0, newWidth, newHeight))
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
which saves the image to the device documents by using this:
private func saveImageWithReturn(image img: UIImage, type: PhotoType) -> UIImage? {
    guard let path = ASSET_PATH.URLByAppendingPathComponent(type.rawValue).path,
        let imageData = UIImageJPEGRepresentation(img, type.compression())
        else { return nil }
    imageData.writeToFile(path, atomically: true)
    return UIImage(data: imageData)
}
The autoreleasepool actually fixes the crashing, but it's running on the main thread, which basically freezes all user interaction. Then I tried:
dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), {
    autoreleasepool({
        self.compressed = self.saveImageWithReturn(image: self.original!.scaledImage(newSize: 2048), type: .Compressed)
    })
})
and it results in memory not being released quickly enough, and it crashes. I believe this is happening because scaledImage(newSize: 2048) isn't processed quickly enough, so multiple requests stack up and all try to process at once, and having multiple instances each holding onto a full original image results in memory warnings or a crash. So far I know it works perfectly on the iPad Air 2, but the 4th-generation iPad seems to process it slowly.
I'm not sure if this is the best way of doing things, or if I should find another way to scale and compress the original file. Any help would be really appreciated.
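One possible mitigation, offered here only as a sketch (it is not from the original thread) and using the same pre-Swift 3 GCD calls as above: funnel the work through one serial queue so that only a single original is being scaled and compressed at a time, instead of several concurrent blocks each holding a full-size image in memory. The queue label is an arbitrary example.
// Create once, e.g. as a property of the downloader; the label is arbitrary.
let compressionQueue = dispatch_queue_create("com.example.compression", DISPATCH_QUEUE_SERIAL)

// For each downloaded original:
dispatch_async(compressionQueue, {
    autoreleasepool({
        self.compressed = self.saveImageWithReturn(image: self.original!.scaledImage(newSize: 2048), type: .Compressed)
    })
})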

Large Image Compositing on iOS in Swift

Although I understand the theory behind image compositing, I haven't dealt much with hardware acceleration and I'm running into implementation issues on iOS (9.2, iPhone 6S). My project is to sequentially composite a large number (20, all the way to hundreds) of large images (12 megapixel) on top of each other at decreasing opacities, and I'm looking for advice as to the best framework or technique. I know there must be a good, hardware accelerated, destructive compositing tool capable of handling large files on iOS, because I can perform this task in Safari in an HTML Canvas tag, and load this page in Safari on the iPhone at nearly the same blazing speed.
This can be a destructive compositing task, like painting in Canvas, so I shouldn't have memory issues as the phone will only have to store the current result up to that point. Ideally, I'd like floating point pixel components, and I'd also like to be able to see the progress on screen.
Core Image has filters that seem great, but they are intended to operate losslessly on one or two pictures and return one result. I can feed that result into the filter again with the next image, and so on, but since the filter doesn't render immediately, this chaining of filters runs me out of memory after about 60 images. Rendering to a Core Graphics image object and reading back in as a Core Image object after each filter doesn't help either, as that overloads the memory even faster.
Looking at the documentation, there are a number of other ways for iOS to leverage the GPU - CALayers being a prime example. But I'm unclear if that handles pictures larger than the screen, or is only intended for framebuffers the size of the screen.
For this task - to leverage the GPU to store a destructively composited "stack" of 12-megapixel photos, repeatedly add an additional one on top at a specified opacity, and output the current contents of the stack scaled down to the screen - what is the best approach? Can I use an established framework/technique, or am I better off diving into OpenGL and Metal myself? I know the iPhone has this capability, I just need to figure out how to leverage it.
This is what I've got so far. The profiler tells me the rendering takes about 350 ms, but I run out of memory if I increase to 20 pics. If I don't render after each loop, I can get to about 60 pics before I run out of memory.
var stackBuffer: CIImage!
var stackRender: CGImage!
var uiImage: UIImage!

let glContext = EAGLContext(API: .OpenGLES3)
let context = CIContext(EAGLContext: glContext)

// Preload list of 10 test pics
var ciImageArray = Array(count: 10, repeatedValue: CIImage.emptyImage())
for i in 0...9 {
    uiImage = UIImage(named: String(i) + ".jpg")!
    ciImageArray[i] = CIImage(image: uiImage)!
}

// Put the first image in the buffer
stackBuffer = ciImageArray[0]

for i in 1...9 {
    // The next image will have an opacity of 1/n
    let topImage = ciImageArray[i]
    let alphaTop = topImage.imageByApplyingFilter(
        "CIColorMatrix", withInputParameters: [
            "inputAVector": CIVector(x: 0, y: 0, z: 0, w: 1/CGFloat(i + 1))
        ])

    // Layer the next image on top of the stack
    let filter = CIFilter(name: "CISourceOverCompositing")!
    filter.setValue(alphaTop, forKey: kCIInputImageKey)
    filter.setValue(stackBuffer, forKey: kCIInputBackgroundImageKey)

    // Render the result, and read back in
    stackRender = context.createCGImage(filter.outputImage!, fromRect: stackBuffer.extent)
    stackBuffer = CIImage(CGImage: stackRender)
}

// Output result
uiImage = UIImage(CGImage: stackRender)
compositeView.image = uiImage

How To Properly Compress UIImages At Runtime

I need to load 4 images for simultaneous editing. When I load them from the user's library, memory exceeds 500 MB and the app crashes.
Here is a log from a raw allocations dump before I did any compression attempts:
Code:
var pickedImage = UIImage(data: imageData)
Instruments: (allocations screenshot not included)
I have read several posts on compressing UIImages. I have tried reducing the UIImage:
New Code:
var pickedImage = UIImage(data: imageData, scale:0.1)
Instruments: (allocations screenshot not included)
Reducing the scale of the UIImage had NO EFFECT?! Very odd.
So now I tried creating a JPEG compression based on the full UIImage
New code:
var pickedImage = UIImage(data: imageData)
var compressedData:NSData = UIImageJPEGRepresentation(pickedImage,0)
var compressedImage:UIImage = UIImage(data: compressedData)!//this is now used to display
Instruments: (allocations screenshot not included)
Now, I suspect that because I am converting the image, it's still being fully loaded. Since this is all occurring inside a callback from PHImageManager, I need a way to create a compressed UIImage directly from the NSData, but setting the scale to 0.1 did NOTHING.
So any suggestions as to how I can compress this UIImage right from the NSData would be life saving!
Thanks
I ended up hard coding a size reduction before processing the image. Here is the code:
PHImageManager.defaultManager().requestImageForAsset(asset, targetSize: CGSizeMake(CGFloat(asset.pixelWidth), CGFloat(asset.pixelHeight)), contentMode: .AspectFill, options: options)
{
    result, info in
    var minRatio: CGFloat = 1
    //Reduce file size so take 1/2 the screen w&h
    if(CGFloat(asset.pixelWidth) > UIScreen.mainScreen().bounds.width/2 || CGFloat(asset.pixelHeight) > UIScreen.mainScreen().bounds.height/2)
    {
        minRatio = min((UIScreen.mainScreen().bounds.width/2)/(CGFloat(asset.pixelWidth)), ((UIScreen.mainScreen().bounds.height/2)/CGFloat(asset.pixelHeight)))
    }
    var size: CGSize = CGSizeMake((CGFloat(asset.pixelWidth)*minRatio), (CGFloat(asset.pixelHeight)*minRatio))
    UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
    result.drawInRect(CGRectMake(0, 0, size.width, size.height))
    var final = UIGraphicsGetImageFromCurrentImageContext()
    var image = iImage(uiimage: final)
}
The reason you're having crashes and seeing such high memory usage is that you are missing the call to UIGraphicsEndImageContext(), so you are leaking memory like crazy.
For every call to UIGraphicsBeginImageContextWithOptions, make sure you have a matching call to UIGraphicsEndImageContext (after UIGraphicsGetImage*).
Also, you should wrap the work in an autorelease pool (I'm presuming you're using ARC); otherwise you'll still have out-of-memory crashes if you are rapidly processing images.
Do it like this:
autoreleasepool {
    UIGraphicsBeginImageContextWithOptions(...)
    // ...
    something = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
}
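Applied to the request callback from the question, the two fixes might look roughly like this (a sketch only; the surrounding requestImageForAsset call and the asker's iImage wrapper are unchanged):
// inside the requestImageForAsset result handler:
autoreleasepool {
    UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
    result.drawInRect(CGRectMake(0, 0, size.width, size.height))
    let final = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()   // balance every Begin call so the context is released
    let image = iImage(uiimage: final)
    // hand `image` off to whatever needs it
}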

UIImage size returned from 'requestImageForAsset' is not even close to the 'targetSize' setting

I've thoroughly read through the latest iOS 8 Photos framework documentation and I am trying to fetch some assets from the user's library to display. I let the user edit 4 images at once, but because of this I need to compress the images, otherwise the app will crash.
I am using a PHImageManager to load the images with the following code:
func processImages()
{
    println("Processing")
    _selectediImages = Array()
    _cacheImageComplete = 0
    for asset in _selectedAssets
    {
        var options: PHImageRequestOptions = PHImageRequestOptions()
        options.version = PHImageRequestOptionsVersion.Unadjusted
        options.synchronous = true

        var minRatio: CGFloat = 1
        if(CGFloat(asset.pixelWidth) > UIScreen.mainScreen().bounds.width || CGFloat(asset.pixelHeight) > UIScreen.mainScreen().bounds.height)
        {
            minRatio = min(UIScreen.mainScreen().bounds.width/(CGFloat(asset.pixelWidth)), (UIScreen.mainScreen().bounds.height/CGFloat(asset.pixelHeight)))
        }
        var size: CGSize = CGSizeMake((CGFloat(asset.pixelWidth)*minRatio), (CGFloat(asset.pixelHeight)*minRatio))
        println("Target size is \(size)")
        PHImageManager.defaultManager().requestImageForAsset(asset, targetSize: size, contentMode: .AspectFill, options: options)
        {
            uiimageResult, info in
            var image = iImage(uiimage: uiimageResult)
            println("Result Size Is \(uiimageResult.size)")
        }
    }
}
As you can see, I am calculating the target size to make sure the image is no bigger than the screen. If it is, I scale it down proportionally. However, here is a typical print log:
Target size is (768.0,798.453531598513)
Result Size Is (1614.0,1678.0)
Even though I am setting the target size to 768x798 (in that specific case), the resulting UIImage it gives me is more than double that. Now, according to the documentation, the targetSize parameter is:
"The target size of image to be returned."
Not the clearest explanation, but from my experiments it is NOT matching this.
If you have any suggestions, I'd love to hear them!
In Swift, you want to do something like this:
var asset: PHAsset!
var imageSize = CGSize(width: 100, height: 100)

var options = PHImageRequestOptions()
options.resizeMode = PHImageRequestOptionsResizeMode.Exact
options.deliveryMode = PHImageRequestOptionsDeliveryMode.Opportunistic

PHImageManager.defaultManager().requestImageForAsset(asset, targetSize: imageSize, contentMode: PHImageContentMode.AspectFill, options: options) {
    (image, info) -> Void in
    // what you want to do with the image here
    print("Result Size Is \(image.size)")
}
In Objective-C, it looks something like this:
void (^resultHandler)(UIImage *, NSDictionary *) = ^(UIImage *result, NSDictionary *info) {
    // what you want to do with the image
};

CGSize cellSize = CGSizeMake(100, 100);

PHImageRequestOptions *options = [[PHImageRequestOptions alloc] init];
options.resizeMode = PHImageRequestOptionsResizeModeExact;
options.deliveryMode = PHImageRequestOptionsDeliveryModeOpportunistic;

[[PHImageManager defaultManager] requestImageForAsset:self.imageAsset
                                           targetSize:cellSize
                                          contentMode:PHImageContentModeAspectFill
                                              options:options
                                        resultHandler:resultHandler];
Important Note: With the Opportunistic delivery mode, the result block may be called more than once with different sizes, but the last call will be the size you want. It's better to use Opportunistic so that the UI will load a low-quality placeholder first and then update it as the OS can generate a better image (rather than having a blank square).
This is because the result handler of requestImageForAsset will be called twice.
The first time, it will return a very small image, such as 60 × 45, which I think is the thumbnail of that image.
The second time, you will get the full-size image.
I use
if ([[info valueForKey:@"PHImageResultIsDegradedKey"] integerValue] == 0) {
    // Do something with the FULL SIZED image
} else {
    // Do something with the degraded image
}
to distinguish between the two images.
Reference: iOS 8 PhotoKit. Get maximum-size image from iCloud Photo Sharing albums
Try setting resizeMode to PHImageRequestOptionsResizeModeExact and deliveryMode to PHImageRequestOptionsDeliveryModeHighQualityFormat. The header comment for PHImageRequestOptionsResizeModeExact reads:
// same as above but also guarantees the delivered image is exactly targetSize (must be set when a normalizedCropRect is specified)
You can use PHImageManagerMaximumSize as the targetSize for a PHImageManager request.
According to the docs for PHImageManagerMaximumSize:
Size to pass when requesting the original image or the largest rendered image available (resizeMode will be ignored)
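For instance, a sketch using the same pre-Swift 3 API as the rest of this page, with asset and options set up as in the answers above:
PHImageManager.defaultManager().requestImageForAsset(asset,
    targetSize: PHImageManagerMaximumSize,
    contentMode: .Default,
    options: options)
{
    image, info in
    // image is the largest rendition available (may involve an iCloud download)
}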
