How To Properly Compress UIImages At Runtime - ios

I need to load 4 images for simultaneous editing. When I load them from the user's library, memory use exceeds 500 MB and the app crashes.
Here is a log from a raw allocations dump before I did any compression attempts:
Code:
var pickedImage = UIImage(data: imageData)
Instruments: [allocations screenshot omitted]
I have read several posts on compressing UIImages. I have tried reducing the UIImage:
New Code:
var pickedImage = UIImage(data: imageData, scale: 0.1)
Instruments: [allocations screenshot omitted]
Reducing the scale of the UIImage had NO EFFECT?! Very odd.
So I then tried creating a JPEG-compressed version from the full UIImage.
New code:
var pickedImage = UIImage(data: imageData)
var compressedData: NSData = UIImageJPEGRepresentation(pickedImage!, 0)
var compressedImage: UIImage = UIImage(data: compressedData)! // this is now used to display
Instruments: [allocations screenshot omitted]
Now, I suspect that because I am converting the image, the full version is still being decoded. And since this is all occurring inside a callback from PHImageManager, I need a way to create a compressed UIImage straight from the NSData, but setting the scale to 0.1 did NOTHING.
So any suggestions as to how I can compress this UIImage right from the NSData would be life saving!
Thanks

I ended up hard-coding a size reduction before processing the image. Here is the code:
PHImageManager.defaultManager().requestImageForAsset(asset, targetSize: CGSizeMake(CGFloat(asset.pixelWidth), CGFloat(asset.pixelHeight)), contentMode: .AspectFill, options: options)
{
    result, info in
    var minRatio: CGFloat = 1
    // Reduce file size so take 1/2 the screen w&h
    if(CGFloat(asset.pixelWidth) > UIScreen.mainScreen().bounds.width/2 || CGFloat(asset.pixelHeight) > UIScreen.mainScreen().bounds.height/2)
    {
        minRatio = min((UIScreen.mainScreen().bounds.width/2)/(CGFloat(asset.pixelWidth)), ((UIScreen.mainScreen().bounds.height/2)/CGFloat(asset.pixelHeight)))
    }
    var size: CGSize = CGSizeMake(CGFloat(asset.pixelWidth)*minRatio, CGFloat(asset.pixelHeight)*minRatio)
    UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
    result.drawInRect(CGRectMake(0, 0, size.width, size.height))
    var final = UIGraphicsGetImageFromCurrentImageContext()
    var image = iImage(uiimage: final) // iImage is the app's own wrapper type
    // (note: no UIGraphicsEndImageContext() here -- see the answer below)
}

The reason you're having crashes and seeing such high memory usage is that you are missing the call to UIGraphicsEndImageContext() -- so you are leaking memory like crazy.
For every call to UIGraphicsBeginImageContextWithOptions, make sure you have a matching call to UIGraphicsEndImageContext (after UIGraphicsGetImage*).
Also, you should wrap the work in an autorelease pool (I'm presuming you're using ARC); otherwise you'll still have out-of-memory crashes if you are rapidly processing images.
Do it like this:
autoreleasepool {
    UIGraphicsBeginImageContextWithOptions(...)
    // ...
    something = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
}
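To make the advice concrete, here is a minimal sketch of the downscale step with the context balanced and wrapped in an autorelease pool (the function and variable names are mine, not from the original post). It also helps explain the earlier failed attempt: the scale: parameter of UIImage(data:scale:) only changes the point-to-pixel mapping used for layout; the full bitmap is still decoded, so it cannot reduce memory.
func downscaledImage(source: UIImage, maxDimension: CGFloat) -> UIImage? {
    var result: UIImage?
    autoreleasepool {
        // Fit the longer side within maxDimension, never upscaling
        let ratio = min(maxDimension / source.size.width,
                        maxDimension / source.size.height, 1)
        let size = CGSizeMake(source.size.width * ratio, source.size.height * ratio)
        // Scale of 1.0 = one pixel per point, so memory use is predictable
        UIGraphicsBeginImageContextWithOptions(size, false, 1.0)
        source.drawInRect(CGRectMake(0, 0, size.width, size.height))
        result = UIGraphicsGetImageFromCurrentImageContext()
        // Balance every Begin with an End, or each image leaks a full-size context
        UIGraphicsEndImageContext()
    }
    return result
}
Running each picked image through something like this before holding on to it keeps only the small bitmaps alive.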

Related

Applying CIFilter to UIImage results in resized and repositioned image

After applying a CIFilter to a photo captured with the camera, the image shrinks and repositions itself.
I was thinking that if I could get the original image's size and orientation, it would scale accordingly and I could pin the image view to the corners of the screen. However, nothing changed with that approach, and I'm not aware of a way to properly get the image to scale to the full size of the screen.
func applyBloom() -> UIImage {
    let ciImage = CIImage(image: image) // image is from UIImageView
    let filteredImage = ciImage?.applyingFilter("CIBloom",
                                                withInputParameters: [kCIInputRadiusKey: 8,
                                                                      kCIInputIntensityKey: 1.00])
    let originalScale = image.scale
    let originalOrientation = image.imageOrientation
    if let image = filteredImage {
        let image = UIImage(ciImage: image, scale: originalScale, orientation: originalOrientation)
        return image
    }
    return self.image
}
Picture description: the captured photo, and a screenshot of the displayed image where the empty spacing is the result of the shrink. [screenshots omitted]
Try something like this, replacing your applyBloom with:
func applyBloom() -> UIImage {
    let ciInputImage = CIImage(image: image)! // image is from UIImageView
    let ciOutputImage = ciInputImage.applyingFilter("CIBloom",
                                                    withInputParameters: [kCIInputRadiusKey: 8, kCIInputIntensityKey: 1.00])
    let context = CIContext()
    let cgOutputImage = context.createCGImage(ciOutputImage, from: ciInputImage.extent)
    return UIImage(cgImage: cgOutputImage!)
}
I renamed various variables to help explain what's happening.
Obviously, depending on your code, some tweaking to optionals and unwrapping may be needed.
What's happening is this - take the filtered/output CIImage, and using a CIContext, write a CGImage the size of the input CIImage.
Be aware that a CIContext is expensive. If you already have one created, you should probably use it.
Pretty much, a UIImage size is the same as a CIImage extent. (I say pretty much because some generated CIImages can have infinite extents.)
Depending on your specific needs (and your UIImageView), you may want to use the output CIImage extent instead. Usually though, they are the same.
Last, a suggestion. If you are trying to use a CIFilter to show "near real-time" changes to an image (like a photo editor), consider the major performance improvements you'll get from using CIImages and a GLKView over UIImages and a UIImageView. The former uses the device's GPU instead of the CPU.
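As a rough sketch of that GLKView approach (the class and property names here are illustrative, not from the question), the idea is to back a GLKView with a CIContext that shares its EAGLContext, so filtered CIImages are drawn by the GPU with no UIImage round trip:
import GLKit

class FilterPreviewView: GLKView {
    // A CIContext that renders into this view's EAGLContext (GPU path)
    private lazy var ciContext: CIContext = CIContext(eaglContext: self.context)

    var image: CIImage? {
        didSet { setNeedsDisplay() }
    }

    override func draw(_ rect: CGRect) {
        guard let image = image else { return }
        // drawableWidth/Height are in pixels; rect and bounds are in points
        let target = CGRect(x: 0, y: 0, width: CGFloat(drawableWidth), height: CGFloat(drawableHeight))
        ciContext.draw(image, in: target, from: image.extent)
    }
}
Create it with GLKView(frame:context:) and an EAGLContext(api: .openGLES2); assigning a new filtered CIImage to image then redraws at GPU speed.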
This could also happen if a CIFilter outputs an image with dimensions different than the input image (e.g. with CIPixellate)
In which case, simply tell the CIContext to render the image in a smaller rectangle:
let cgOutputImage = context.createCGImage(ciOutputImage, from: ciInputImage.extent.insetBy(dx: 20, dy: 20))

If a filter is applied to a PNG where height > width, it rotates the image 90 degrees. How can I efficiently prevent this?

I'm making a simple filter app. I've found that if you load an image from the camera roll that is a PNG (PNGs carry no orientation flag) and its height is greater than its width, applying certain distortion filters will rotate the image and present it as if it were a landscape image.
I found the technique below online somewhere in the many tabs I had open, and it seems to do exactly what I want. It uses the original scale and orientation of the image from when it was first loaded.
let newImage = UIImage(CIImage:(output), scale: 1.0, orientation: self.origImage.imageOrientation)
but this is the warning I get when I try to use it:
Ambiguous use of 'init(CIImage:scale:orientation:)'
Here's the entire thing I'm trying to get working:
// global variables
var image: UIImage!
var origImage: UIImage!

func setFilter(action: UIAlertAction) {
    origImage = image
    // make sure we have a valid image before continuing!
    guard let image = self.imageView.image?.cgImage else { return }
    let openGLContext = EAGLContext(api: .openGLES3)
    let context = CIContext(eaglContext: openGLContext!)
    let ciImage = CIImage(cgImage: image)
    let currentFilter = CIFilter(name: "CIBumpDistortion")
    currentFilter?.setValue(ciImage, forKey: kCIInputImageKey)
    if let output = currentFilter?.value(forKey: kCIOutputImageKey) as? CIImage {
        // the line below is the one giving me errors, which I thought would work
        let newImage = UIImage(CIImage: (output), scale: 1.0, orientation: self.image.imageOrientation)
        self.imageView.image = UIImage(cgImage: context.createCGImage(newImage, from: output.extent)!)
    }
}
The filters all work; unfortunately they turn the images described above by 90 degrees, for the reasons I suspect.
I've tried some other methods, like using an extension that checks the orientation of UIImages, converting the CIImage to a UIImage, applying the extension, then converting it back to a CIImage, or just loading the UIImage into the image view for output. I ran into snag after snag with that process, and it started to seem really convoluted just to get certain images into their default orientation.
Any advice would be greatly appreciated!
EDIT: here's where I got the method I was trying: When applying a filter to a UIImage the result is upside down
I found the answer. My biggest issue was the "Ambiguous use of 'init(CIImage:scale:orientation:)'" error.
It turned out that Xcode was auto-completing the code as 'CIImage:scale:orientation:' when it should have been 'ciImage:scale:orientation:'. The very vague error left a new dev like me scratching my head for 3 days over this. (This was true for the CGImage and UIImage inits as well, but my original error was with CIImage, so I used that to explain.)
With that knowledge I was able to formulate the code below for my new output:
if let output = currentFilter?.value(forKey: kCIOutputImageKey) as? CIImage {
    let outputImage = UIImage(cgImage: context.createCGImage(output, from: output.extent)!)
    let imageTurned = UIImage(cgImage: outputImage.cgImage!, scale: CGFloat(1.0), orientation: origImage.imageOrientation)
    centerScrollViewContents()
    self.imageView.image = imageTurned
}
This code replaces the if let output in the OP.
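In other words, the Swift 3 renaming lowercased the argument label, so with the variables from the snippet above the working call is simply:
// Swift 3: the label is ciImage:, not CIImage:
let newImage = UIImage(ciImage: output, scale: 1.0, orientation: origImage.imageOrientation)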

Compressing Large Assets From Dropbox

Currently I'm working on downloading all the images in a user's selected folder. So this process consists of:
Requesting all the thumbnails of the images
Requesting all the original images
Take the original and create a retina compressed version to display
The reason we need to keep the original is that it's the file we will be printing on anything from 8x10 picture frames to 40x40 canvas wraps, so having the original is important. The only part causing the crash is taking the original and creating the compressed version. I ended up using this:
autoreleasepool {
    self.compressed = self.saveImageWithReturn(image: self.original!.scaledImage(newSize: 2048), type: .Compressed)
}
scaling the image by calling:
func scaledImage(newSize newHeight: CGFloat) -> UIImage {
    let scale = newHeight / size.height
    let newWidth = size.width * scale
    UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight))
    drawInRect(CGRectMake(0, 0, newWidth, newHeight))
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
which saves the image to the device documents by using this:
private func saveImageWithReturn(image img: UIImage, type: PhotoType) -> UIImage? {
    guard let path = ASSET_PATH.URLByAppendingPathComponent(type.rawValue).path,
          let imageData = UIImageJPEGRepresentation(img, type.compression())
    else { return nil }
    imageData.writeToFile(path, atomically: true)
    return UIImage(data: imageData)
}
The autoreleasepool actually fixes the crashing problem, but it runs on the main thread, basically freezing all user interaction. Then I tried
dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0)) {
    autoreleasepool {
        self.compressed = self.saveImageWithReturn(image: self.original!.scaledImage(newSize: 2048), type: .Compressed)
    }
}
and it results in memory not being released quickly enough, and it crashes. I believe this is happening because it can't process scaledImage(newSize: 2048) fast enough, so the multiple requests stack up, each one holding onto an original image, which results in memory warnings or a crash. So far I know it works perfectly on the iPad Air 2, but the 4th-generation iPad seems to process it slowly.
Not sure if this is the best way of doing things, or if I should find another way to scale and compress the original file. Any help would be really appreciated.
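One approach worth trying, sketched below under the assumption that the downloads arrive faster than the scaling (the queue name is made up): funnel every scale-and-save through a single serial background queue, so only one full-size original is decoded at a time, with each job in its own autorelease pool.
// A serial queue runs one job at a time, so at most one original
// is held in memory off the main thread
let scalingQueue = dispatch_queue_create("com.example.image-scaling", DISPATCH_QUEUE_SERIAL)

func compressInBackground() {
    dispatch_async(scalingQueue) {
        autoreleasepool {
            self.compressed = self.saveImageWithReturn(
                image: self.original!.scaledImage(newSize: 2048),
                type: .Compressed)
        }
    }
}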

Save image as is in photo album using swift

I've written a steganography application in Swift v2. The workflow is simple: I open an image, type in a message to save, perform bit manipulation to modify the least significant bits, and then save to the photo album.
Problem is, iOS is running compression (I believe) on my image and some of the bits change.
How can I save my image directly to the photo album without having iOS change any of my bits? (I can post the code here, but there is a lot of it)
(this is a small snippet of the overall code)
let imageRef = CGBitmapContextCreateImage(context)
let newImage = UIImage(CGImage: imageRef!)
UIImageWriteToSavedPhotosAlbum(newImage, nil, nil, nil)
It seems that I just needed to convert my newImage to a UIImagePNGRepresentation.
let imageRef = CGBitmapContextCreateImage(context)
let newImage = UIImage(CGImage: imageRef!)
let newImagePNG = UIImagePNGRepresentation(newImage)
var saveableImage = UIImage(data: newImagePNG!)
saveImage(saveableImage!)
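One caveat worth flagging (not from the original answer): UIImageWriteToSavedPhotosAlbum picks its own output encoding, so for a bit-exact round trip it may be safer to hand the PNG data directly to the Photos framework. A sketch assuming iOS 9+, reusing newImagePNG from above:
import Photos

PHPhotoLibrary.sharedPhotoLibrary().performChanges({
    // Stores the PNG bytes verbatim as a new asset, with no re-encoding
    let request = PHAssetCreationRequest.creationRequestForAsset()
    request.addResourceWithType(.Photo, data: newImagePNG!, options: nil)
}, completionHandler: { success, error in
    print("saved: \(success), error: \(error)")
})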

iOS 8 Load Images Fast From PHAsset

I have an app that lets people combine up to 4 pictures. However, when I let them choose from their photos (up to 4), it can be very slow even when I set the image quality to FastFormat: it takes 4 seconds (about 1 second per photo). On highest quality, 4 images take 6 seconds.
Can you suggest any way I can get the images out faster?
Here is the block where I process images.
func processImages()
{
    _selectediImages = Array()
    _cacheImageComplete = 0
    for asset in _selectedAssets
    {
        var options: PHImageRequestOptions = PHImageRequestOptions()
        options.synchronous = true
        options.deliveryMode = PHImageRequestOptionsDeliveryMode.FastFormat
        PHImageManager.defaultManager().requestImageForAsset(asset, targetSize: CGSizeMake(CGFloat(asset.pixelWidth), CGFloat(asset.pixelHeight)), contentMode: .AspectFit, options: options)
        {
            result, info in
            var minRatio: CGFloat = 1
            // Reduce file size so take 1/2 the screen w&h
            if(CGFloat(asset.pixelWidth) > UIScreen.mainScreen().bounds.width/2 || CGFloat(asset.pixelHeight) > UIScreen.mainScreen().bounds.height/2)
            {
                minRatio = min((UIScreen.mainScreen().bounds.width/2)/(CGFloat(asset.pixelWidth)), ((UIScreen.mainScreen().bounds.height/2)/CGFloat(asset.pixelHeight)))
            }
            var size: CGSize = CGSizeMake(CGFloat(asset.pixelWidth)*minRatio, CGFloat(asset.pixelHeight)*minRatio)
            UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
            result.drawInRect(CGRectMake(0, 0, size.width, size.height))
            var final = UIGraphicsGetImageFromCurrentImageContext()
            var image = iImage(uiimage: final) // iImage is the app's own wrapper type
            self._selectediImages.append(image)
            self._cacheImageComplete!++
            println(self._cacheImageComplete)
            if(self._cacheImageComplete == self._selectionCount)
            {
                self._processingImages = false
                self.selectionCallback(self._selectediImages)
            }
        }
    }
}
Don't resize the images yourself — part of what PHImageManager is for is to do that for you. (It also caches the thumbnail images so that you can get them more quickly next time, and shares that cache across apps so that you don't end up with half a dozen apps creating half a dozen separate 500MB thumbnail caches of your whole library.)
func processImages() {
    _selectediImages = Array()
    _cacheImageComplete = 0
    for asset in _selectedAssets {
        let options = PHImageRequestOptions()
        options.deliveryMode = .FastFormat
        // request images no bigger than 1/3 the screen width
        let maxDimension = UIScreen.mainScreen().bounds.width / 3 * UIScreen.mainScreen().scale
        let size = CGSize(width: maxDimension, height: maxDimension)
        PHImageManager.defaultManager().requestImageForAsset(asset, targetSize: size, contentMode: .AspectFill, options: options)
        { result, info in
            // probably some of this code is unnecessary, too,
            // but I'm not sure what you're doing here so leaving it alone
            self._selectediImages.append(result)
            self._cacheImageComplete!++
            println(self._cacheImageComplete)
            if self._cacheImageComplete == self._selectionCount {
                self._processingImages = false
                self.selectionCallback(self._selectediImages)
            }
        }
    }
}
Notable changes:
Don't ask for images synchronously on the main thread. Just don't.
Pass a square maximum size to requestImageForAsset and use the AspectFill mode. This will get you an image that crops to fill that square no matter what its aspect ratio is.
You're asking for images by their pixel size here, and the screen size is in points. Multiply by the screen scale or your images will be pixelated. (Then again, you're asking for FastFormat, so you might get blurry images anyway.)
Why did you say synchronous? Obviously that's going to slow things way down. Moreover, saying synchronous on the main thread is absolutely forbidden!!!! Read the docs and obey them. That is the primary issue here.
There are then many other considerations. Basically you're using this call all wrong. Once you've removed the synchronous, do not process the image like that! Remember, this callback is going to be called many times as the image is provided in better and better versions. You must not do anything time-consuming here.
(Also, why are you resizing the image? If you wanted the image at a certain size, you should have asked for that size when you requested it. Let the image-fetcher do the work for you.)
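To act on that multiple-callback point, the info dictionary tells you whether a callback carries a preliminary, degraded result; a minimal sketch using the Photos framework's PHImageResultIsDegradedKey:
PHImageManager.defaultManager().requestImageForAsset(asset, targetSize: size, contentMode: .AspectFill, options: options)
{ result, info in
    // The handler can fire several times; skip the low-quality passes
    if let degraded = info?[PHImageResultIsDegradedKey] as? Bool where degraded {
        return
    }
    // handle the final-quality result here
}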
