Compressing Large Assets From Dropbox - iOS

Currently I'm working on downloading all the images provided within a user's selected folder. The process consists of:
Requesting all the thumbnails of the images
Requesting all the original images
Taking each original and creating a retina compressed version to display
The reason we need to keep the original is that it's the file we will be printing on anything from 8x10 picture frames to 40x40 canvas wraps, so having the original is important. The only part causing the crash is taking the original and creating the compressed version. I ended up using this:
autoreleasepool({
    self.compressed = self.saveImageWithReturn(image: self.original!.scaledImage(newSize: 2048), type: .Compressed)
})
scaling the image by calling:
func scaledImage(newSize newHeight: CGFloat) -> UIImage {
    let scale = newHeight / size.height
    let newWidth = size.width * scale
    UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight))
    drawInRect(CGRectMake(0, 0, newWidth, newHeight))
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
which saves the image to the device documents by using this:
private func saveImageWithReturn(image img: UIImage, type: PhotoType) -> UIImage? {
    guard let path = ASSET_PATH.URLByAppendingPathComponent(type.rawValue).path,
        let imageData = UIImageJPEGRepresentation(img, type.compression())
        else { return nil }
    imageData.writeToFile(path, atomically: true)
    return UIImage(data: imageData)
}
The autoreleasepool actually fixes the crashing problem, but it operates on the main thread, basically freezing all user interaction. Then I tried:
dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), {
    autoreleasepool({
        self.compressed = self.saveImageWithReturn(image: self.original!.scaledImage(newSize: 2048), type: .Compressed)
    })
})
and it results in memory not being released quickly enough, and it crashes. I believe this is happening because scaledImage(newSize: 2048) isn't processed quickly enough, causing the multiple requests to stack up; with multiple instances all trying to hold onto an original image, the result is memory warnings or a crash. So far I know it works perfectly on the iPad Air 2, but the iPad 4th generation seems to process it slowly.
I'm not sure if this is the best way of doing things, or if I should find another way to scale and compress the original file. Any help would be really appreciated.
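One approach that may help, sketched here in modern Swift (the snippets above are Swift 2 era): funnel every scale-and-compress job through a single serial queue so that only one original image is in flight at a time, and drain an autoreleasepool per job. `enqueueCompression` and its `Data` payload are illustrative stand-ins, not the asker's actual API.

```swift
import Foundation
import Dispatch

// Sketch: a serial queue processes one original at a time, so concurrent
// download callbacks can't each hold a full-size bitmap simultaneously.
// The per-job autoreleasepool drains temporary buffers before the next job.
let compressionQueue = DispatchQueue(label: "com.example.compression") // serial by default

func enqueueCompression(of original: Data,
                        completion: @escaping (Data) -> Void) {
    compressionQueue.async {
        autoreleasepool {
            // Stand-in for the real work: scaledImage(newSize: 2048)
            // followed by JPEG encoding at the .Compressed quality.
            let compressed = original
            completion(compressed)
        }
    }
}
```

Because the queue is serial, jobs complete in submission order, and peak memory stays at roughly one original plus one scaled copy regardless of how many downloads finish at once.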

Related

Swift UIImage .jpegData() and .pngData() changes image size

I am using Swift's Vision framework for deep learning and want to upload the input image to the backend using a REST API, for which I am converting my UIImage to MultipartFormData using the jpegData() and pngData() functions that Swift natively offers.
I use session.sessionPreset = .vga640x480 to specify the image size in my app for processing.
I was seeing a different image size in the backend, which I was able to confirm in the app, because a UIImage recreated from that data is of a different size.
This is how I convert image to multipartData -
let multipartData = MultipartFormData()
if let imageData = self.image?.jpegData(compressionQuality: 1.0) {
    multipartData.append(imageData, withName: "image", fileName: "image.jpeg", mimeType: "image/jpeg")
}
This is what I see in the Xcode debugger (screenshot not included):
The following looks intuitive, but manifests the behavior you describe, whereby one ends up with a Data representation of the image with an incorrect scale and pixel size:
let ciImage = CIImage(cvImageBuffer: pixelBuffer) // 640×480
let image = UIImage(ciImage: ciImage) // says it is 640×480 with scale of 1
guard let data = image.pngData() else { ... } // but if you extract `Data` and then recreate image from that, the size will be off by a multiple of your device’s scale
However, if you create it via a CGImage, you will get the right result:
let ciImage = CIImage(cvImageBuffer: pixelBuffer)
let ciContext = CIContext()
guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { return }
let image = UIImage(cgImage: cgImage)
You asked:
If my image is 640×480 points with scale 2, would my deep learning model still take the same time to process it as a 1280×960-point image with scale 1?
There is no difference, as far as the model goes, between 640×480 pt @ 2× and 1280×960 pt @ 1×.
The question is whether 640×480 pt @ 2× is better than 640×480 pt @ 1×: the model will undoubtedly generate better results, though possibly more slowly, with higher-resolution images (though at 2× the asset is roughly four times larger and slower to upload; on a 3× device it will be roughly nine times larger).
But if you look at the larger asset generated by the direct CIImage » UIImage process, you can see that it did not really capture a 1280×960 snapshot; rather, it captured 640×480 and upscaled it (with some smoothing). So you do not really have a more detailed asset, and it is unlikely to generate better results: you will pay the penalty of the larger asset, but likely without any benefit.
If you need better results with larger images, I would change the preset to a higher resolution, but still avoid the scale-based adjustment by using the CIContext/CGImage-based snippet shared above.
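The point-versus-pixel arithmetic behind the mismatch can be sketched in a few lines (plain Foundation math; the function name is illustrative). Encoded PNG/JPEG data is measured in pixels, so a 640×480-point image at scale 2 round-trips as a 1280×960-pixel file, which UIImage(data:) then reopens at scale 1:

```swift
import Foundation

// Sketch: UIKit reports sizes in points; encoded image data is in pixels.
// pixelSize shows why a 640×480 pt image at scale 2 comes back from
// pngData() + UIImage(data:) as a 1280×960 pt image at scale 1.
func pixelSize(pointSize: CGSize, scale: CGFloat) -> CGSize {
    CGSize(width: pointSize.width * scale,
           height: pointSize.height * scale)
}
```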

Release memory after UIGraphicsImageRenderer work

I have many images saved in an asset catalog (Assets.xcassets). I'm resizing them to make them smaller; later I don't need these images anymore. The images were never shown, just prepared, and the class that loaded, resized, and kept them is now successfully deinited.
But the memory used while UIGraphicsImageRenderer resized the images is not released; memory usage stays at the same level, even if I don't use the images at all, as in the example code below.
I think something is wrong. I resized the images to use less memory, but on the contrary, the resized images use more memory, and it is not released after the owning class has been deinited.
How do I release the memory?
Apple's documentation says: "...An image renderer keeps a cache of Core Graphics contexts, so reusing the same renderer can be more efficient than creating new renderers." But I don't need that! How do I switch it off?
With 28 images it seems like no big deal, but I have about 100-300 images that need to be resized, cropped, and so on with UIGraphicsImageRenderer, and at the end of the day that uses about 800-900 MB of memory that is just cache from render jobs already done.
You can take the code below and try it.
class ExampleClass {
    func start() {
        Worker().doWork()
    }
}

class Worker {
    deinit {
        print("deinit \(Self.self)")
    }

    func doWork() {
        var urls: [String] = []
        _ = (1...28).map({ urls.append("pathToTheImages/\($0)") })
        // images with resolution 1024.0 x 1366.0 pixels
        for url in urls {
            let img = UIImage(named: url)! // Memory usage: 11.7 MB
            //let cropped = resizedImage(original: img, to: UIScreen.main.bounds.size)
            //With the line above - memory usage: 17.5 MB, even after this class has been deinited
        }
    }

    // from 2048 × 3285 pixels >>> to >>> 768.0 x 1024.0 pixels --- for iPad Pro (9.7-inch)
    func resizedImage(original: UIImage, to size: CGSize) -> UIImage {
        let result = autoreleasepool { () -> UIImage in
            let renderer = UIGraphicsImageRenderer(size: size)
            let result = renderer.image { (context) in
                original.draw(in: CGRect(origin: .zero, size: size))
            }
            return result
        }
        return UIImage(cgImage: result.cgImage!, scale: original.scale, orientation: original.imageOrientation)
    }
}
Asset catalogs are not intended for the use to which you are putting them. The purpose of an image in the asset catalog is to display it, directly. If you have a lot of images that you want to load and resize and save elsewhere without displaying, you need to keep them in your app bundle at the top level, so that you can call init(contentsOfFile:) which does not cache the image.

Why the app terminates due to a memory warning when converting a PDF page to a high-quality image on a real device (iOS, Swift)

I'm trying to get a high-quality image of each PDF page. I'm using the code below, run in a for loop up to the page count, and it works.
guard let document = CGPDFDocument(pdfurl as CFURL) else { return }
guard let page = document.page(at: i) else { return }
let dpi: CGFloat = 300.0 / 72.0
let pagerect = page.getBoxRect(.mediaBox)
print(pagerect)
let render = UIGraphicsImageRenderer(size: CGSize(width: pagerect.size.width * dpi, height: pagerect.size.height * dpi))
let imagedata = render.jpegData(withCompressionQuality: 0.5, actions: { cnv in
    UIColor.white.set()
    cnv.fill(pagerect)
    cnv.cgContext.translateBy(x: 0.0, y: pagerect.size.height * dpi)
    cnv.cgContext.scaleBy(x: dpi, y: -dpi)
    cnv.cgContext.drawPDFPage(page)
})
let image = UIImage(data: imagedata)
I'm getting the following issues with this:
Sometimes the image is nil.
When this runs, memory usage is very high.
As the page count grows, memory usage gets very, very high; sometimes it reaches 1.4 GB and the app suddenly crashes with the warning: Terminated due to memory warning. I then tried running the above code inside an autoreleasepool. That worked, but when memory usage climbs higher (near the RAM size), the app crashes again with the same warning.
How can I avoid this memory warning and get a quality image from a PDF page? Hope someone can help. Have a nice day.
If you are facing this issue then try this:
autoreleasepool {
    guard let page = document.page(at: i) else { return }
    // Fetch the page rect for the page we want to render.
    let pageRect = page.getBoxRect(.mediaBox)
    var dpi: CGFloat = 1.0
    if pageRect.size.width > pageRect.size.height {
        dpi = 3508.0 / pageRect.size.width
    } else {
        dpi = 3508.0 / pageRect.size.height
    }
    //dpi = 300
    let format = UIGraphicsImageRendererFormat()
    format.scale = 1
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: pageRect.size.width * dpi, height: pageRect.size.height * dpi), format: format)
    let imagedata = renderer.jpegData(withCompressionQuality: 1.0, actions: { cnv in
        UIColor.white.set()
        cnv.fill(pageRect)
        cnv.cgContext.translateBy(x: 0.0, y: pageRect.size.height * dpi)
        cnv.cgContext.scaleBy(x: dpi, y: -dpi)
        cnv.cgContext.drawPDFPage(page)
    })
    let image = UIImage(data: imagedata)
}
autoreleasepool - so memory is cleared promptly
scale = 1 - so that device-specific images are not created, which would multiply their resolution by 2 or 3 times
changed the way dpi is computed, since the page's native size can initially correspond to more or less than 72 dpi
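The dpi computation in the snippet above boils down to fitting the page's longest side to a target pixel count (3508 px is roughly the long edge of A4 at 300 dpi). A minimal sketch of that arithmetic, with an illustrative function name:

```swift
import Foundation

// Sketch: scale factor that maps the page's longest side (in PDF points)
// to a fixed target pixel dimension, independent of the page's native size
// and orientation.
func renderScale(for pageSize: CGSize, longestSide target: CGFloat = 3508.0) -> CGFloat {
    target / max(pageSize.width, pageSize.height)
}
```

For a US Letter page (612×792 pt) this yields 3508/792 ≈ 4.43, so the rendered bitmap comes out at roughly 2711×3508 px whichever way the page is oriented.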
1) sometimes the image is nil.
Is there a reason you are generating JPEG data and then converting it to a UIImage, versus directly creating a UIImage (using func image(actions: (UIGraphicsImageRendererContext) -> Void) -> UIImage)?
If you really need to use the JPEG method, then don't instantiate directly with UIImage(data:); use CGImage's init?(jpegDataProviderSource:decode:shouldInterpolate:intent:), then use UIImage(cgImage:) to get your UIImage instance.
2) When this runs, the usage of memory is very high
Are you storing all the images created for each page? If so, then for a PDF with a high page count you will hit peak memory at some point because of the accumulated images. Why not write each image to disk as it is created and release it afterwards, so you don't accumulate every page in memory?
Sharing the loop (assuming there is a loop) outside of this snippet would help in solving your issue.
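That suggestion can be sketched as follows. `renderPage` is a hypothetical stand-in for the `renderer.jpegData(...)` call in the answer above, and the per-iteration autoreleasepool ensures only one page's data is alive at a time:

```swift
import Foundation

// Sketch: write each rendered page straight to disk instead of
// accumulating images in an array; peak memory then stays at roughly
// one page's worth regardless of the total page count.
func savePages(count: Int,
               renderPage: (Int) -> Data, // stand-in for renderer.jpegData
               to directory: URL) throws -> [URL] {
    var urls: [URL] = []
    for pageIndex in 1...count {
        try autoreleasepool {
            let data = renderPage(pageIndex)
            let url = directory.appendingPathComponent("page-\(pageIndex).jpg")
            try data.write(to: url)
            urls.append(url)
        }
    }
    return urls
}
```

The returned URLs can later be fed to UIImage(contentsOfFile:) one page at a time, so the full set never has to be decoded at once.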

iOS 8 Load Images Fast From PHAsset

I have an app that lets people combine up to 4 pictures. However, when I let them choose from their photos (up to 4), it can be very slow even when I set image quality to FastFormat: it takes 4 seconds (about 1 second per photo). On highest quality, 4 images take 6 seconds.
Can you suggest any way I can get the images out faster?
Here is the block where I process images.
func processImages() {
    _selectediImages = Array()
    _cacheImageComplete = 0
    for asset in _selectedAssets {
        var options: PHImageRequestOptions = PHImageRequestOptions()
        options.synchronous = true
        options.deliveryMode = PHImageRequestOptionsDeliveryMode.FastFormat
        PHImageManager.defaultManager().requestImageForAsset(asset, targetSize: CGSizeMake(CGFloat(asset.pixelWidth), CGFloat(asset.pixelHeight)), contentMode: .AspectFit, options: options)
        {
            result, info in
            var minRatio: CGFloat = 1
            //Reduce file size so it takes 1/2 the screen w&h
            if(CGFloat(asset.pixelWidth) > UIScreen.mainScreen().bounds.width/2 || CGFloat(asset.pixelHeight) > UIScreen.mainScreen().bounds.height/2)
            {
                minRatio = min((UIScreen.mainScreen().bounds.width/2)/(CGFloat(asset.pixelWidth)), ((UIScreen.mainScreen().bounds.height/2)/CGFloat(asset.pixelHeight)))
            }
            var size: CGSize = CGSizeMake((CGFloat(asset.pixelWidth)*minRatio), (CGFloat(asset.pixelHeight)*minRatio))
            UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
            result.drawInRect(CGRectMake(0, 0, size.width, size.height))
            var final = UIGraphicsGetImageFromCurrentImageContext()
            var image = iImage(uiimage: final)
            self._selectediImages.append(image)
            self._cacheImageComplete!++
            println(self._cacheImageComplete)
            if(self._cacheImageComplete == self._selectionCount)
            {
                self._processingImages = false
                self.selectionCallback(self._selectediImages)
            }
        }
    }
}
Don't resize the images yourself — part of what PHImageManager is for is to do that for you. (It also caches the thumbnail images so that you can get them more quickly next time, and shares that cache across apps so that you don't end up with half a dozen apps creating half a dozen separate 500MB thumbnail caches of your whole library.)
func processImages() {
    _selectediImages = Array()
    _cacheImageComplete = 0
    for asset in _selectedAssets {
        let options = PHImageRequestOptions()
        options.deliveryMode = .FastFormat
        // request images no bigger than 1/3 the screen width
        let maxDimension = UIScreen.mainScreen().bounds.width / 3 * UIScreen.mainScreen().scale
        let size = CGSize(width: maxDimension, height: maxDimension)
        PHImageManager.defaultManager().requestImageForAsset(asset, targetSize: size, contentMode: .AspectFill, options: options)
        { result, info in
            // probably some of this code is unnecessary, too,
            // but I'm not sure what you're doing here so leaving it alone
            self._selectediImages.append(result)
            self._cacheImageComplete!++
            println(self._cacheImageComplete)
            if self._cacheImageComplete == self._selectionCount {
                self._processingImages = false
                self.selectionCallback(self._selectediImages)
            }
        }
    }
}
Notable changes:
Don't ask for images synchronously on the main thread. Just don't.
Pass a square maximum size to requestImageForAsset and use the AspectFill mode. This will get you an image that crops to fill that square no matter what its aspect ratio is.
You're asking for images by their pixel size here, and the screen size is in points. Multiply by the screen scale or your images will be pixelated. (Then again, you're asking for FastFormat, so you might get blurry images anyway.)
Why did you say synchronous? Obviously that's going to slow things way down. Moreover, saying synchronous on the main thread is absolutely forbidden!!!! Read the docs and obey them. That is the primary issue here.
There are then many other considerations. Basically you're using this call all wrong. Once you've removed the synchronous, do not process the image like that! Remember, this callback is going to be called many times as the image is provided in better and better versions. You must not do anything time-consuming here.
(Also, why are you resizing the image? If you wanted the image at a certain size, you should have asked for that size when you requested it. Let the image-fetcher do the work for you.)
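The point/pixel bookkeeping from that advice can be captured in a tiny helper (illustrative names, plain Foundation math): the target size handed to PHImageManager is in pixels, so a thumbnail meant to occupy a third of the screen's width in points must be multiplied by the screen scale.

```swift
import Foundation

// Sketch: convert a desired on-screen fraction (1/3 of the screen width,
// in points) into the pixel target size PHImageManager expects.
func thumbnailTargetSize(screenWidthInPoints: CGFloat, screenScale: CGFloat) -> CGSize {
    let side = screenWidthInPoints / 3 * screenScale
    return CGSize(width: side, height: side)
}
```

On a 375-pt-wide 2× screen this requests 250×250 px, which AspectFill then crops to regardless of the asset's aspect ratio.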

How To Properly Compress UIImages At Runtime

I need to load 4 images for simultaneous editing. When I load them from the user's library, memory exceeds 500 MB and the app crashes.
Here is a log from a raw Allocations dump before I attempted any compression:
Code:
var pickedImage = UIImage(data: imageData)
Instruments:
I have read several posts on compressing UIImages. I have tried reducing the UIImage:
New Code:
var pickedImage = UIImage(data: imageData, scale:0.1)
Instruments:
Reducing the scale of the UIImage had NO EFFECT?! Very odd.
So I then tried creating a JPEG-compressed version of the full UIImage.
New code:
var pickedImage = UIImage(data: imageData)
var compressedData:NSData = UIImageJPEGRepresentation(pickedImage,0)
var compressedImage:UIImage = UIImage(data: compressedData)!//this is now used to display
Instruments:
Now, I suspect that because I am converting the image, it's still being fully loaded. Since this is all occurring inside a callback from PHImageManager, I need a way to create a compressed UIImage from the NSData; setting the scale to 0.1 did NOTHING.
So any suggestions as to how I can compress this UIImage right from the NSData would be life-saving!
Thanks
I ended up hard coding a size reduction before processing the image. Here is the code:
PHImageManager.defaultManager().requestImageForAsset(asset, targetSize: CGSizeMake(CGFloat(asset.pixelWidth), CGFloat(asset.pixelHeight)), contentMode: .AspectFill, options: options)
{
    result, info in
    var minRatio: CGFloat = 1
    //Reduce file size so it takes 1/2 the screen w&h
    if(CGFloat(asset.pixelWidth) > UIScreen.mainScreen().bounds.width/2 || CGFloat(asset.pixelHeight) > UIScreen.mainScreen().bounds.height/2)
    {
        minRatio = min((UIScreen.mainScreen().bounds.width/2)/(CGFloat(asset.pixelWidth)), ((UIScreen.mainScreen().bounds.height/2)/CGFloat(asset.pixelHeight)))
    }
    var size: CGSize = CGSizeMake((CGFloat(asset.pixelWidth)*minRatio), (CGFloat(asset.pixelHeight)*minRatio))
    UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
    result.drawInRect(CGRectMake(0, 0, size.width, size.height))
    var final = UIGraphicsGetImageFromCurrentImageContext()
    var image = iImage(uiimage: final)
}
The reason you're having crashes and seeing such high memory usage is that you are missing the call to UIGraphicsEndImageContext(), so you are leaking memory like crazy.
For every call to UIGraphicsBeginImageContextWithOptions, make sure you have a matching call to UIGraphicsEndImageContext (after UIGraphicsGetImage*).
Also, you should wrap the work in @autoreleasepool (I'm presuming you're using ARC); otherwise you'll still get out-of-memory crashes if you are rapidly processing images.
Do it like this:
@autoreleasepool {
    UIGraphicsBeginImageContextWithOptions(...);
    ...
    something = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
