Problems accurately timing the loading of an image from file into UIImageView - iOS

I am trying to measure the time taken to load a large photo (JPEG) from file into a UIImageView on iOS 8.0.
My current code:
import UIKit

class ViewController: UIViewController {
    @IBOutlet weak var imageView: UIImageView!

    @IBAction func loadImage(sender: UIButton) {
        if let imageFile = NSBundle.mainBundle().pathForResource("large_photo", ofType: "jpg") {
            // start our timer
            let tick = Tick()
            // load a very large image file into imageView
            // the test photo used is a 4608 × 3456 pixel JPEG
            // using contentsOfFile: to prevent caching while testing the timer
            imageView.image = UIImage(contentsOfFile: imageFile)
            // stop our timer and print execution time
            tick.tock()
        }
    }
}
class Tick {
    let tickTime: NSDate

    init() {
        tickTime = NSDate()
    }

    func tock() {
        let tockTime = NSDate()
        let executionTime = tockTime.timeIntervalSinceDate(tickTime)
        println("[execution time]: \(executionTime)")
    }
}
When I load a very large image (4608 x 3456 JPEG) on my test device (5th-gen iPod touch), loading takes ~2-3 seconds and blocks the main thread. This is observable from the fact that the UIButton remains in a highlighted state for this period and no other UI elements allow interaction.
I would therefore expect my timing function to report a time of ~2-3 seconds. However, it reports a time in milliseconds, e.g.:
[execution time]: 0.0116159915924072
That is, tick.tock() prints its message to the console before the image is displayed. This confuses me, as the main thread appears to be blocked until after the image appears.
This leads me to ask the following questions:
if the image is being loaded asynchronously in the background, then why is user interaction/the main thread blocked?
if the image is being loaded on the main thread, why does the tick.tock() function print to the console before the image is displayed?

There are 2 parts to what you are measuring here:
Loading the image from disk:
UIImage(contentsOfFile: imageFile)
And decompressing the image from a JPEG to a bitmap to be displayed:
imageView.image = ....
The first part involves actually retrieving the compressed JPEG data from disk (disk I/O) and creating a UIImage object. The UIImage object holds a reference to the compressed data until it needs to be displayed. Only at the moment it's ready to be rendered to the screen does it decompress the image into a bitmap to display (on the main thread).
My guess is that your timer is only catching the disk-load part, and the decompression happens on the next run loop. Decompressing an image of that size is likely to take a while, probably the lion's share of the total time.
If you want to explicitly measure how long the decompression takes, you'll need to do it manually, by drawing the image to an off screen context, like so:
let tick = Tick()
// Load the compressed image data from disk
let image = UIImage(contentsOfFile: imageFile)!
// Decompress the image into a bitmap by drawing it into an offscreen context
UIGraphicsBeginImageContextWithOptions(image.size, true, 0)
image.drawInRect(CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
tick.tock()
Here we are replicating the decompression that would happen when you assigned the image to the imageView.image
A handy trick to keep the UI responsive when dealing with images this size is to kick the whole process onto a background thread. This works well because once you have manually decompressed the image, UIKit detects this and doesn't repeat the process.
// Switch to a background queue
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
    // Load the compressed image data from disk
    let image = UIImage(contentsOfFile: imageFile)!
    // Decompress the image into a bitmap by drawing it into an offscreen context
    UIGraphicsBeginImageContextWithOptions(image.size, true, 0)
    image.drawInRect(CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    // Switch back to the main queue to update UIKit
    dispatch_async(dispatch_get_main_queue()) {
        // Display the already-decompressed image
        imageView.image = newImage
    }
}
A disclaimer: the code here has not been fully tested in Xcode, but it should be 99% of the way there if you decide to use it.

I would try to time this using a unit test, since the XCTest framework provides some good performance-measurement tools. I think this approach would get around the lazy-loading issues, although I'm not 100% sure about it.
func testImagePerformance() {
    measureBlock() {
        if let imageFile = NSBundle.mainBundle().pathForResource("large_photo", ofType: "jpg") {
            imageView.image = UIImage(contentsOfFile: imageFile)
        }
    }
}
(Just an aside: you mentioned that the loading blocks the main app thread. You should look into using an NSOperationQueue to make sure that doesn't happen; you probably already know that: http://nshipster.com/nsoperation/)
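A minimal sketch of that idea, reusing imageFile and imageView from the question (untested, same Swift 1.x-era APIs as above):
let queue = NSOperationQueue()
queue.addOperationWithBlock {
    // Load and decode off the main thread
    let image = UIImage(contentsOfFile: imageFile)
    NSOperationQueue.mainQueue().addOperationWithBlock {
        // UIKit work goes back on the main thread
        imageView.image = image
    }
}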

Related

Can memory in my code be better managed when using UIImageView.animationImages and startAnimating()?

I'm declaring some UIImage arrays:
var animationImages1: [UIImage] = []
var animationImages2: [UIImage] = []
I'm using a background thread to load the images:
DispatchQueue.global(qos: .background).async { () -> Void in
    self.animationImages1 = self.createImageArray(total: 57, imagePrefix: "animation1")
    self.animationImages2 = self.createImageArray(total: 42, imagePrefix: "animation2")
}
The function called above:
func createImageArray(total: Int, imagePrefix: String) -> [UIImage] {
    var imageArray: [UIImage] = []
    for imageCount in 1..<total {
        let imageName = "\(imagePrefix)\(imageCount)"
        let image = UIImage(contentsOfFile: Bundle.main.path(forResource: imageName, ofType: "png")!)!
        //let image = UIImage(named: imageName)!
        imageArray.append(image)
    }
    return imageArray
}
Then when I want to animate, I'm calling this function:
func animate(imageView: UIImageView, images: [UIImage], duration: TimeInterval) {
    imageView.animationImages = images
    imageView.animationDuration = duration
    imageView.animationRepeatCount = 1
    imageView.startAnimating()
}
I tried doing both theImageView.stopAnimating() and theImageView.animationImages = nil before calling the animation again, but didn't notice any improvement in memory management with either.
With UIImage(named:) the images visible in the app start disappearing either partially or completely as memory runs low. With UIImage(contentsOfFile:) the app promptly crashes once memory runs low.
To note: I tried UIImage(named:) with the images in the Assets catalog, and then switched to UIImage(contentsOfFile:) with the images dragged into the project outside of Assets.xcassets.
Is it possible to use this function of UIImageView for longer animations (2-5 seconds: 40-150 pngs) with a file size of about 450k each, or is it too much a memory strain regardless of how you go about it?
It currently runs without an issue on a newer iPad Pro, and using Xcode's simulator it runs well (but eats a lot of memory) on all device sizes. On an iPhone X and an iPhone 8 Plus, it runs out of memory pretty early on - after playing through 5 to 10 animations or so.
Am I missing something, is it not possible, or do I need to do further research on ways to keep memory in check while running these large UIImage arrays through startAnimating()?
Memory usage is not going down. I must be caching this somewhere...
Thanks for any help!
I created a new Xcode project simplified to focus on just this issue, and was given a correct answer by @Stephan Schlecht here.
Although the memory hit doesn't occur when assigning the images to an array, and only happens once the animation is played, the only way to reclaim the memory seems to be removing all references to the images by setting both the variable array and the animationImages property on the UIImageView to empty arrays.
So in the particular example given in this question:
animationImages1 = []
imageView.animationImages = []
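Putting that together, a sketch of a hypothetical helper (not from the original question): play the animation once, then clear the image view's reference after it finishes so the decoded frames can be released.
func playOnce(imageView: UIImageView, images: [UIImage], duration: TimeInterval) {
    imageView.animationImages = images
    imageView.animationDuration = duration
    imageView.animationRepeatCount = 1
    imageView.startAnimating()
    // Once the single pass has finished, drop the frames again
    DispatchQueue.main.asyncAfter(deadline: .now() + duration) {
        imageView.stopAnimating()
        imageView.animationImages = []
    }
}
The owning arrays (animationImages1 and animationImages2 above) still need to be cleared separately, since they hold their own strong references to the frames.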

UIImageJPEGRepresentation using large amount of memory (Swift 3.0)

I'm trying to compress and get the NSData from between 20 and 30 UIImages with a for-loop, like this:
for theImage in selectedUIImages {
    let data = UIImageJPEGRepresentation(theImage, 0.5)
    // do something with the data
}
I tried this on an iPhone 7 with no issues besides my app using up to 700MB of memory when going through the loop, but on an older iPhone I get the message:
*Message from debugger: Terminated due to memory issue.*
The main objective is to get the NSData from the UIImage so I can put the image in a directory for uploading. Let me explain: the Amazon S3 Transfer Utility wants a path/URL to the image, so I need to make a path/URL for the UIImage, and the only way I know to get one is:
data.write(to: URL(fileURLWithPath: localPath), options: .atomic)
Try using an autorelease pool inside the loop, so each image's JPEG data is released before the next iteration:
for theImage in selectedUIImages {
    autoreleasepool {
        let data = UIImageJPEGRepresentation(theImage, 0.5)
        // do something with the data
    }
}
and move the whole loop to a background thread.
Your app is crashing because it runs out of memory. You can compress each image and save it to the Documents directory as you go, then upload the files to the server one by one; that way the JPEG data never all sits in memory at once.
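A rough sketch of that combination: a background queue, a per-image autorelease pool, and writing each JPEG straight to disk. The temporary-file naming here is illustrative, not from the question:
DispatchQueue.global(qos: .utility).async {
    for (index, theImage) in selectedUIImages.enumerated() {
        autoreleasepool {
            guard let data = UIImageJPEGRepresentation(theImage, 0.5) else { return }
            // Write each JPEG out immediately so only one image's data is in memory at a time
            let url = URL(fileURLWithPath: NSTemporaryDirectory())
                .appendingPathComponent("upload-\(index).jpg")
            try? data.write(to: url, options: .atomic)
        }
    }
}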
You can also decrease the data size by lowering the compression-quality ratio parameter; try 0.3 instead of 0.5:
for theImage in selectedUIImages {
    let data = UIImageJPEGRepresentation(theImage, 0.3)
    // do something with the data
}

Firebase Storage memory leak when downloading a number of photos from downloadURLs on UICollectionView

I uploaded some photos to Firebase Storage following the sample project on GitHub.
Before using Firebase Storage, I was saving my photos to another website. When I download photos from the image URLs I saved on that other website, nothing is wrong and memory usage is reasonable. But when I paste image URL links into the corresponding children in the Firebase Database and then download from those URLs, I have a terrible memory issue: for every ~200KB image, memory usage goes up by ~10MB. Since I don't have this problem downloading images from other URLs, I believe this is a Firebase-specific issue. Does anyone else encounter the same memory issue? Any suggestions/help?
NOTE: I saved the URLs of the images to the Firebase Realtime Database. I download the URL links from there and give them to my photo collection view cells. Here is the code I wrote for my photo collection view cells:
class PhotosCollectionViewCell: UICollectionViewCell {
    @IBOutlet weak var imageView: UIImageView!
    private var downloadTask: FIRStorageDownloadTask!

    var imageURL: String! {
        didSet {
            downloadTask = FIRStorage.storage().referenceForURL(imageURL).dataWithMaxSize(1 * 1024 * 1024) { (imageData, error) in
                guard error == nil else { print(error); return }
                if let imageData = imageData {
                    self.imageView.image = UIImage(data: imageData)
                }
                // imageView.kf_showIndicatorWhenLoading = true
                // imageView.kf_setImageWithURL(NSURL(string: imageURL)!)
            }
        }
    }

    override func prepareForReuse() {
        super.prepareForReuse()
        imageView.image = nil
        // imageView.kf_cancelDownloadTask()
        downloadTask.cancel()
    }
}
The only thing I want to solve is being able to download the images I saved to Firebase Storage from the URLs that I also save in the Realtime Database. One important fact: Kingfisher downloads images from URLs without any memory issue. The problem only occurs when the image URLs are from Firebase Storage.
NOTE: I also get the memory issue when downloading those images with the Firebase Storage function. I know it's normal for memory usage to go up to some extent, but my images in Firebase Storage are only about 200KB each.
While you are only downloading 200KB blobs of data, it costs much more memory than that to display them as images. While the "clarity" of the image may be compressed away, it still has the same pixel dimensions. An image usually requires 4 bytes per pixel: one each for red, green, blue, and alpha. So a 2000x1000 pixel image requires ~8MB of memory, which is close to what you are describing. I would first ask whether you are caching the images; if you are, cache the data instead. What will probably help you more, though, is to resize the image you are displaying, unless you need it at full size. Use something like:
extension UIImage {
    /// Returns an image scaled to fill newSize
    func resizedImage(newSize: CGSize) -> UIImage {
        // Skip the work if the size is already right
        guard self.size != newSize else { return self }

        UIGraphicsBeginImageContextWithOptions(newSize, false, 0.0)
        self.draw(in: CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height))
        let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return newImage
    }
}
Scale to something like half the dimensions of the original image and it will cost about 4x less memory. Make sure you then keep only the resized image and drop your reference to the full-size original. Hope this helps.
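For example, a usage sketch inside the cell's download callback (variable names assumed from the question's cell code):
if let imageData = imageData, let fullImage = UIImage(data: imageData) {
    // Half the dimensions costs roughly a quarter of the bitmap memory
    let halfSize = CGSize(width: fullImage.size.width / 2,
                          height: fullImage.size.height / 2)
    self.imageView.image = fullImage.resizedImage(newSize: halfSize)
}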
I faced the same problem, and one thing that helped me is SDWebImage. It can load images from both storage references and direct URLs.
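A minimal sketch of the direct-URL route, assuming the SDWebImage pod is installed (loading from a Firebase storage reference additionally needs the FirebaseUI bindings):
import SDWebImage

// Downloads, decodes, and caches the image off the main thread
imageView.sd_setImage(with: URL(string: imageURL))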

iOS image loader effect

I am using "SVProgressHUD" for loader. When I want to load an image from url I am using async process to load the image in background. After completing the download I am updating the UI. And for that time span I am using a placeholder on the imageview and a loader with "Please wait..". But I want to use a loader like "Instagram". That means, when an image is loading from online it loads a blurry image of the original image and upon downloading the image , the blurry image show the original image.
Can any one please suggest me how can I do this?
The stuff you are talking about is progressive JPEG.
Progressive JPEG (PJPEG) is an image format that stores multiple, individual "scans" of a photo, each with an increasing level of detail. When put together, the scans create a full-quality image. The first scan gives a very low-quality representation of the image, and each following scan further increases the level of detail and quality. When images are downloaded using PJPEG, we can render the image as soon as we have the first scan. As later scans come through, we update the image and re-render it at higher and higher quality.
So, you have to make your images available in this progressive format. You can follow an approach where every image is converted to the progressive format before being saved on the server. Several tools are available for different types of servers, e.g. jpegtran.
To check whether an image is progressive or not, you can use this tool or tool2; plenty of online tools are available.
Now for iOS, you can follow this tutorial, or use a library that handles progressive decoding for you:
Concorde
DFImageManager
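Both the tutorial and the libraries above build on ImageIO's incremental decoding. A minimal sketch of that underlying approach (Swift 3; didReceive(chunk:isFinal:) and imageView are stand-ins for your own networking and UI code):
import ImageIO
import UIKit

let source = CGImageSourceCreateIncremental(nil)
var receivedData = Data()

func didReceive(chunk: Data, isFinal: Bool) {
    receivedData.append(chunk)
    // Hand ImageIO everything received so far
    CGImageSourceUpdateData(source, receivedData as CFData, isFinal)
    // Re-render whichever scan is decodable at this point
    if let cgImage = CGImageSourceCreateImageAtIndex(source, 0, nil) {
        imageView.image = UIImage(cgImage: cgImage)
    }
}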
Assuming you are in control of the hosting of the images, you can host lower-resolution versions and make two fetches: one for the low resolution and one for the high. If you make the low-resolution image significantly smaller in file size, its fetch will finish before the fetch for the full image, and you can populate the UIImageView with it until the full-size image arrives.
Purely example code would look like this:
let imageView = UIImageView()

let lowResOperation = NSBlockOperation {
    guard let imageData = NSData(contentsOfURL: NSURL(string: "https://myhost.org/low-res-image")!),
        image = UIImage(data: imageData) else { return }
    NSOperationQueue.mainQueue().addOperationWithBlock {
        // Only show the low-res image if the high-res one hasn't arrived first
        if imageView.image == nil {
            imageView.image = image
        }
    }
}
let highResOperation = NSBlockOperation {
    guard let imageData = NSData(contentsOfURL: NSURL(string: "https://myhost.org/high-res-image")!),
        image = UIImage(data: imageData) else { return }
    NSOperationQueue.mainQueue().addOperationWithBlock {
        imageView.image = image
    }
}

let backgroundQueue = NSOperationQueue()
backgroundQueue.addOperations([lowResOperation, highResOperation], waitUntilFinished: false)

GPUImageView stop responding to "Filter Change" after two times

I'm probably missing something. I'm trying to change the filter on my GPUImageView. It actually works the first two times (sometimes only once), and then stops responding to changes. I couldn't find a way to remove the target from my GPUImageView.
Code
for x in filterOperations {
    x.filter.removeAllTargets()
}

let f = filterOperations[randomIntInRange].filter
let media = GPUImagePicture(image: self.largeImage)
media?.addTarget(f as! GPUImageInput)
f.addTarget(g_View)
media?.processImage()
Any suggestions? (I'm processing a still image from my library.)
UPDATE
Updated Code
// Global
var g_View: GPUImageView!
var media: GPUImagePicture!

override func viewDidLoad() {
    super.viewDidLoad()
    media = GPUImagePicture(image: largeImage)
}

func changeFilter(filterIndex: Int) {
    media.removeAllTargets()
    let f = returnFilter(indexPath.row) // e.g. GPUImageSepiaFilter()
    media.addTarget(f as! GPUImageInput)
    f.addTarget(g_View)

    // second part
    f.useNextFrameForImageCapture()
    let sema = dispatch_semaphore_create(0)
    media.processImageWithCompletionHandler({
        dispatch_semaphore_signal(sema)
        return
    })
    dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER)

    if let img = f.imageFromCurrentFramebufferWithOrientation(largeImage.imageOrientation) {
        // Usable - update UI
    } else {
        // Something went wrong
    }
}
My primary suggestion would be to not create a new GPUImagePicture every time you want to change the filter or its options that you're applying to an image. This is an expensive operation, because it requires a pass through Core Graphics and a texture upload to the GPU.
Also, since you're not maintaining a reference to your GPUImagePicture beyond the above code, it is being deallocated as soon as you pass out of scope. That tears down the render chain and will lead to a black image or even crashes. processImage() is an asynchronous operation, so it may still be in action at the time you exit your above scope.
Instead, create and maintain a reference to a single GPUImagePicture for your image, swap out filters (or change the options for existing filters) on that, and target the result to your GPUImageView. This will be much faster, churn less memory, and won't leave you open to premature deallocation.
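A hedged sketch of that structure, reusing the names from the question (GPUImage 1.x, untested):
var picture: GPUImagePicture!   // kept alive for the controller's lifetime

override func viewDidLoad() {
    super.viewDidLoad()
    picture = GPUImagePicture(image: largeImage)
}

func apply(filter: GPUImageOutput) {
    // Rebuild the chain on the long-lived picture rather than
    // creating a new GPUImagePicture per filter change
    picture.removeAllTargets()
    picture.addTarget(filter as! GPUImageInput)
    filter.addTarget(g_View)
    picture.processImage()
}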
