I get strange behaviour when I use the following code to load an image multiple times:
NSDictionary *options = @{GLKTextureLoaderOriginBottomLeft: @YES};
textureInfo = [GLKTextureLoader textureWithCGImage:[UIImage imageNamed:@"name"].CGImage
                                           options:options
                                             error:nil];
It works as expected when I load the image the first time, but when I try to load the same image again it's drawn upside down.
I think this has to do with the fact that it's actually the same CGImage that gets passed to the texture loader because of the use of imageNamed:. The flip transformation is therefore applied a second time on the same image.
Is there a way to get around this issue?
I guess you could flip the image yourself and load it just once when your program starts.
Or don't use imageNamed:. Or keep the texture in memory so you only have to load it once.
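For that last option, here is a minimal sketch of keeping the texture in memory. The method name and the textureCache property (an NSMutableDictionary on whatever class owns the textures) are made up for illustration; loading with imageWithContentsOfFile: also avoids handing imageNamed:'s shared CGImage to the loader twice.
- (GLKTextureInfo *)textureNamed:(NSString *)name
{
    GLKTextureInfo *info = self.textureCache[name];
    if (info == nil) {
        // Load an uncached UIImage so the loader never sees the same CGImage twice.
        NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:@"png"];
        UIImage *image = [UIImage imageWithContentsOfFile:path];

        NSDictionary *options = @{GLKTextureLoaderOriginBottomLeft: @YES};
        NSError *error = nil;
        info = [GLKTextureLoader textureWithCGImage:image.CGImage
                                            options:options
                                              error:&error];
        if (info != nil) {
            self.textureCache[name] = info; // subsequent calls reuse the GL texture
        }
    }
    return info;
}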
I'm downloading an image from the server using NSURLSessionDownloadTask (the iOS 7 API), and inside the completion block I want to resize the original image and store it locally. So I wrote a helper method that creates a bitmap context, draws the image, and gets the new image from UIGraphicsGetImageFromCurrentImageContext(). The problem is that the image memory is never released each time I do this. However, if I skip the context and image drawing, things work fine and there is no memory growth. There is no CGImageCreate/Release call, so there is really nothing to release manually, and nothing is fixed by adding @autoreleasepool here either. Is there any way to fix this? I really want to modify the original image after downloading and before storing.
Here are some snippets showing the issue:
[self fetchImageByDownloadTaskWithURL:url completion:^(UIImage *image, NSError *error) {
    UIImage *modifiedImage = [image resizedImageScaleAspectFitToSize:imageView.frame.size];
    // save to local disk
    // ...
}];
// This is the resize method in UIImage Category
- (UIImage *)resizedImageScaleAspectFitToSize:(CGSize)size
{
    CGSize imageSize = [self scaledSizeForAspectFitToSize:size];
    UIGraphicsBeginImageContextWithOptions(imageSize, YES, 0.0);
    CGRect imageRect = CGRectMake(0.0, 0.0, imageSize.width, imageSize.height);
    [self drawInRect:imageRect]; // nothing changes if self is captured weakly
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Update:
When I dig in with the Allocations instrument, I find that the memory growth is related to "VM: CG raster data". In my storing method I use NSCache as a photo memory cache before storing persistently, and the raster data eats a lot of memory when the memory cache is used. It seems that once the rendered image is cached, all of its drawing data also stays alive in memory until I release all the cached images. If I don't memory-cache the image, then none of the raster data coming from my image category method stays alive in memory. I just can't figure out why the drawing data is not released after the image is cached. Shouldn't it be released after drawing?
Further update:
I still haven't figured out why the raster data is not released while the image used for drawing stays alive, and there is certainly no Analyze warning about it. So I guess I just have to avoid caching the huge images drawn to fit the large size, and remove cached drawn images when I no longer need them. If I call [UIImage imageNamed:] and draw with it, it never seems to be released together with its raster data, since the image is system-cached, so I called [UIImage imageWithContentsOfFile:] instead. Eventually the memory performs well. The remaining growth is something called "non-object" in the Allocations instrument, which I don't understand yet. Simulating a memory warning does release the system-cached image created by [UIImage imageNamed:]. As for the raster data, I will run some more tests tomorrow and see.
Try making your category method a class method instead. Perhaps the leak is the original CGImage data which you are overwriting when you call [self drawInRect:imageRect];.
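A minimal sketch of what that class-method variant might look like. The method name is made up, and the aspect-fit math is inlined here because the asker's scaledSizeForAspectFitToSize: helper isn't shown in the question:
+ (UIImage *)resizedImage:(UIImage *)image scaleAspectFitToSize:(CGSize)size
{
    // Inline aspect-fit calculation (stands in for scaledSizeForAspectFitToSize:).
    CGFloat scale = MIN(size.width / image.size.width, size.height / image.size.height);
    CGSize imageSize = CGSizeMake(image.size.width * scale, image.size.height * scale);

    UIGraphicsBeginImageContextWithOptions(imageSize, YES, 0.0);
    [image drawInRect:CGRectMake(0.0, 0.0, imageSize.width, imageSize.height)];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}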
I know that loading an image like this
UIImage *image = [UIImage imageNamed:@"img"];
will cache the image and loading it like this will not
UIImage *image = [UIImage imageWithContentsOfFile:@"img.png"];
People say it will be faster to access the cached image because iOS will read it from memory and we avoid the overhead of reading and decoding the file. OK, I see that, but suppose I use the second, non-cached method to load the image into a view that is a property, like this:
UIImage *image = [UIImage imageWithContentsOfFile:@"img.png"];
self.imageView = [[UIImageView alloc] initWithImage:image];
isn't the image already in memory? If I want to access it, I simply do imageView.image and get it from memory.
I am probably tired, but I cannot imagine a single use for the cached version, or I am not understanding what this cache means.
Care to explain? Thanks.
Imagine that your image is some icon that you use in 7 different view controllers... You could either load the image once and then pass it to each VC or you could use imageNamed... Your choice.
From the documentation:
This method looks in the system caches for an image object with the specified name and returns that object if it exists. If a matching image object is not already in the cache, this method locates and loads the image data from disk or asset catalog, and then returns the resulting object. You can not assume that this method is thread safe.
Let's say you have an image in your app. When you need to use the image, you use this code:
UIImage *image = [UIImage imageWithContentsOfFile:@"img.png"];
iOS will then look for your image in the app bundle, load it into memory, and then decode it into the UIImage.
However, say you need 10 different objects to use the image and you load it similar to this:
for (ClassThatNeedsImage *object in objects) {
    object.image = [UIImage imageWithContentsOfFile:@"img.png"];
}
(This isn't the best example since you could just load the image once and pass it to each of the objects. However, I have had more complex code where that is not an option.)
iOS will then look for the image 10 times, load it into memory 10 times, and then decode it 10 times. However, if you use imageNamed:
for (ClassThatNeedsImage *object in objects) {
    object.image = [UIImage imageNamed:@"img"];
}
From Wikipedia:
In computing, a cache is a component that transparently stores data so that future requests for that data can be served faster.
The cache used by UIImage is stored in memory, which is much faster to access than the disk.
The first time through the loop, iOS looks in the cache to see if the image is stored there. Assuming you haven't loaded this image with imageNamed previously, it doesn't find it, so it looks for the image, loads it into memory, decodes it, and then copies it into the cache.
On the other iterations, iOS looks in the cache, finds the image, and copies it into the UIImage object, so it doesn't have to do any hard disk access at all.
If you are only going to use the image once in the lifetime of your app, use imageWithContentsOfFile:. If you are going to use the image multiple times, use imageNamed:.
I'm using CAKeyframeAnimation to animate a few PNG images. I cut them from one PNG image using CGImageCreateWithImageInRect.
When I run the Analyze function in Xcode it shows me potential memory leaks. They are because of CGImageCreateWithImageInRect. But when I use CGImageRelease after the animation object is created, no images are shown (I know why, but when I don't release, there are leaks).
Could someone explain this memory situation to me? And what is the best solution? I was thinking about creating a UIImage with a CGImageRef for each "cut", but CAKeyframeAnimation takes an array of CGImageRefs, so I thought it wasn't necessary to create those UIImages.
UIImage *source = [UIImage imageNamed:@"my_anim_actions.png"];

cutRect = CGRectMake(0 * dimForImg.width, 0 * dimForImg.height, dimForImg.width, dimForImg.height);
CGImageRef image1 = CGImageCreateWithImageInRect([source CGImage], cutRect);
cutRect = CGRectMake(1 * dimForImg.width, 0 * dimForImg.height, dimForImg.width, dimForImg.height);
CGImageRef image2 = CGImageCreateWithImageInRect([source CGImage], cutRect);

NSArray *images = [[NSArray alloc] initWithObjects:(__bridge id)image1, (__bridge id)image2, (__bridge id)image2, (__bridge id)image1, (__bridge id)image2, (__bridge id)image1, nil];

CAKeyframeAnimation *myAnimation = [CAKeyframeAnimation animationWithKeyPath:@"contents"];
myAnimation.calculationMode = kCAAnimationDiscrete;
myAnimation.duration = kMyTime;
myAnimation.values = images; // NSArray of CGImageRefs
[myAnimation setValue:@"ANIMATION_MY" forKey:@"MyAnimation"];
myAnimation.removedOnCompletion = NO;
myAnimation.fillMode = kCAFillModeForwards;

CGImageRelease(image1); CGImageRelease(image2); // YES or NO?
Two solutions:
First: Use [UIImageView animationImages] to animate between multiple images.
Second: Store the images as ivars and release them in the dealloc of your class.
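A minimal sketch of the first suggestion, reusing the asker's dimForImg and kMyTime and assuming imageView is the UIImageView hosting the animation (that variable is not in the question). Wrapping each cut in a UIImage lets you release the CGImageRefs immediately, because UIImage retains what it wraps:
UIImage *source = [UIImage imageNamed:@"my_anim_actions.png"];

CGRect cutRect = CGRectMake(0, 0, dimForImg.width, dimForImg.height);
CGImageRef cut1 = CGImageCreateWithImageInRect(source.CGImage, cutRect);
cutRect = CGRectMake(dimForImg.width, 0, dimForImg.width, dimForImg.height);
CGImageRef cut2 = CGImageCreateWithImageInRect(source.CGImage, cutRect);

UIImage *frame1 = [UIImage imageWithCGImage:cut1];
UIImage *frame2 = [UIImage imageWithCGImage:cut2];
CGImageRelease(cut1); // safe: the UIImages retain the CGImages they wrap
CGImageRelease(cut2);

imageView.animationImages = @[frame1, frame2, frame2, frame1, frame2, frame1];
imageView.animationDuration = kMyTime;
[imageView startAnimating];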
Not sure if this helps, but I ran into something similar where using CGImageCreateWithImageInRect to take sections of an image and then showing those in image views caused huge memory usage. One thing that improved memory usage a lot was encoding the sub-image into PNG data and re-decoding it back into an image again.
I know, that sounds ridiculous and completely redundant, right? But it helped a lot. According to the documentation of CGImageCreateWithImageInRect,
The resulting image retains a reference to the original image, which
means you may release the original image after calling this function.
It seems as though when the image is shown, the UI copies the image data, including the original image data that it references; that's why it uses so much memory. But when you write it to PNG and back again, you create independent image data that is smaller and does not depend on the original. This is just my guess.
Anyway, try it and see if it helps.
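Roughly, the round-trip looks like this; source and cropRect are placeholders for whatever image and sub-rect you are cutting:
CGImageRef cropped = CGImageCreateWithImageInRect(source.CGImage, cropRect);
UIImage *croppedImage = [UIImage imageWithCGImage:cropped];
CGImageRelease(cropped);

// Encode to PNG and decode again: the result owns its own (smaller) pixel
// buffer and no longer holds a reference to the big source image.
NSData *pngData = UIImagePNGRepresentation(croppedImage);
UIImage *standaloneImage = [UIImage imageWithData:pngData];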
I'm confused as to why so much converting between image formats is needed in iOS. For example, if I load a jpg into a UIImage and then want to do face detection on it, I need to create a CIImage to pass to the CIDetector. Doesn't this represent a hit in both memory and performance?
Is this some legacy thing between Core Graphics, Core Image and UIKit (and probably openGL ES but I don't work with that)? Is the hit trivial overall?
I'll do what I need to do, but I'd like to understand more about why this is needed. Also, I've sometimes run into issues doing conversions and gotten tangled up in the differences between the formats.
Update
Ok - so I just got dinged again by my confusion over these formats (or the confusion OF these formats...). Wasted a half hour. Here is what I was doing:
Testing for faces in a local image, I created the needed CIImage with:
CIImage *ciImage = [image CIImage];
and was not getting any features back no matter what orientation I passed in. I know this particular image has worked with CIDetectorTypeFace before, and I have run into trouble with the CIImage format in the past. Then I tried creating the CIImage like this:
CIImage *ciImage = [CIImage imageWithCGImage:image.CGImage];
and Face Detection works fine. Arrgh! I made sure with [image CIImage] that the resulting CIImage was not nil. So I'm confused. The first approach just gets a pointer while the second creates a new CIImage. Does that make the difference?
Digging into the UIImage.h file I see the following:
// returns underlying CGImageRef or nil if CIImage based
@property(nonatomic,readonly) CGImageRef CGImage;
// returns underlying CIImage or nil if CGImageRef based
@property(nonatomic,readonly) CIImage *CIImage;
So I guess that is the key - Developer Beware: test for nil...
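Given those header comments, one defensive pattern is to fall back to wrapping the CGImage whenever the CIImage property comes back nil. The helper name below is made up for illustration:
static CIImage *CIImageFromUIImage(UIImage *image)
{
    CIImage *ciImage = image.CIImage;   // nil when the UIImage is CGImage-based
    if (ciImage == nil && image.CGImage != NULL) {
        ciImage = [CIImage imageWithCGImage:image.CGImage];
    }
    return ciImage;                     // may still be nil if the image has neither backing
}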
The reason lies in the design: UIKit, Core Graphics and Core Image do three such fundamentally different things that there can't be a 'grand central unified image format'. These frameworks do cooperate well; that said, the conversions should be as optimized and as fast as possible, but of course image processing is always a relatively expensive computational operation.
It seems that CGContextDrawImage(CGContextRef, CGRect, CGImageRef) performs MUCH WORSE when drawing a CGImage that was created by CoreGraphics (i.e. with CGBitmapContextCreateImage) than it does when drawing the CGImage which backs a UIImage. See this testing method:
- (void)showStrangePerformanceOfCGContextDrawImage
{
    /// Setup: load an image and start a context:
    UIImage *theImage = [UIImage imageNamed:@"reallyBigImage.png"];
    UIGraphicsBeginImageContext(theImage.size);
    CGContextRef ctxt = UIGraphicsGetCurrentContext();
    CGRect imgRec = CGRectMake(0, 0, theImage.size.width, theImage.size.height);

    /// Why is this SO MUCH faster...
    NSDate *startingTimeForUIImageDrawing = [NSDate date];
    CGContextDrawImage(ctxt, imgRec, theImage.CGImage); // draw the existing image into the context using the UIImage backing
    NSLog(@"Time was %f", [[NSDate date] timeIntervalSinceDate:startingTimeForUIImageDrawing]);

    /// Create a new image from the context to use this time in CGContextDrawImage:
    CGImageRef theImageConverted = CGBitmapContextCreateImage(ctxt);

    /// This is WAY slower, but why?? Using a pure CGImageRef (as opposed to one behind a UIImage) seems like it should be faster, but AT LEAST it should be the same speed!?
    NSDate *startingTimeForNakedGImageDrawing = [NSDate date];
    CGContextDrawImage(ctxt, imgRec, theImageConverted);
    NSLog(@"Time was %f", [[NSDate date] timeIntervalSinceDate:startingTimeForNakedGImageDrawing]);
}
So I guess the question is, #1 what may be causing this and #2 is there a way around it, i.e. other ways to create a CGImageRef which may be faster? I realize I could convert everything to UIImages first but that is such an ugly solution. I already have the CGContextRef sitting there.
UPDATE: This doesn't necessarily seem to hold when drawing small images. That may be a clue: the problem is amplified when large images (i.e. full-size camera pics) are used. 640x480 seems to be pretty similar in execution time with either method.
UPDATE 2: OK, so I've discovered something new. It's actually NOT the backing of the CGImage that is changing the performance. I can flip-flop the order of the two steps and make the UIImage method behave slowly, whereas the "naked" CGImage will be super fast. It seems whichever you perform second suffers from terrible performance. This seems to be the case UNLESS I free memory by calling CGImageRelease on the image I created with CGBitmapContextCreateImage. Then the UIImage-backed method is fast subsequently. The inverse is not true. What gives? "Crowded" memory shouldn't affect performance like this, should it?
UPDATE 3: Spoke too soon. The previous update holds true for images at 2048x2048, but stepping up to 1936x2592 (camera size) the naked CGImage method is still way slower, regardless of the order of operations or the memory situation. Maybe there are some CG internal limits that make a 16MB image efficient whereas a 21MB image can't be handled efficiently. It's literally 20 times slower to draw the camera size than a 2048x2048. Somehow UIImage provides its CGImage data much faster than a pure CGImage object does. o.O
UPDATE 4: I thought this might have to do with some memory caching thing, but the results are the same whether the UIImage is loaded with the non-caching [UIImage imageWithContentsOfFile:] or with [UIImage imageNamed:].
UPDATE 5 (Day 2): After creating more questions than were answered yesterday, I have something solid today. What I can say for sure is the following:
The CGImages behind a UIImage don't use alpha (kCGImageAlphaNoneSkipLast). I thought that maybe they were faster to draw because my context WAS using alpha. So I changed the context to use kCGImageAlphaNoneSkipLast. This makes the drawing MUCH faster, UNLESS:
Drawing a UIImage into a CGContextRef FIRST makes ALL subsequent image drawing slow.
I proved this by 1) first creating a non-alpha context (1936x2592); 2) filling it with randomly colored 2x2 squares; 3) full-frame drawing a CGImage into that context was FAST (0.17 seconds); 4) repeating the experiment but filling the context with a drawn CGImage backing a UIImage, after which subsequent full-frame image drawing took 6+ seconds. SLOWWWWW.
Somehow drawing into a context with a (large) UIImage drastically slows all subsequent drawing into that context.
Well, after a TON of experimentation I think I have found the fastest way to handle situations like this. The drawing operation above, which was taking 6+ seconds, now takes 0.1 seconds. YES. Here's what I discovered:
Homogenize your contexts & images with a single pixel format! The root of the question I asked boiled down to the fact that the CGImages inside a UIImage were using THE SAME PIXEL FORMAT as my context, and were therefore fast, while the other CGImages were a different format and therefore slow. Inspect your images with CGImageGetAlphaInfo to see which pixel format they use. I'm using kCGImageAlphaNoneSkipLast EVERYWHERE now, as I don't need to work with alpha. If you don't use the same pixel format everywhere, Quartz is forced to perform an expensive conversion for EACH pixel when drawing an image into a context. = SLOW
USE CGLayers! These make offscreen-drawing performance much better. How this works is basically as follows: 1) create a CGLayer from the context using CGLayerCreateWithContext; 2) do any drawing and setting of drawing properties on THIS LAYER'S CONTEXT, obtained with CGLayerGetContext (and READ any pixels or information from the ORIGINAL context); 3) when done, "stamp" this CGLayer back onto the original context using CGContextDrawLayerAtPoint. This is FAST, as long as you keep in mind:
1) Release any CGImages created from a context (i.e. those created with CGBitmapContextCreateImage) BEFORE "stamping" your layer back into the CGContextRef using CGContextDrawLayerAtPoint. This creates a 3-4x speed increase when drawing that layer. 2) Keep your pixel format the same everywhere!! 3) Clean up CG objects AS SOON as you can. Things hanging around in memory seem to create strange situations of slowdown, probably because there are callbacks or checks associated with these strong references. Just a guess, but I can say that CLEANING UP MEMORY ASAP helps performance immensely.
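Putting the pixel-format and CGLayer points together, here is a condensed sketch. The function name, the sizes, and the assumption that nothing needs alpha are mine, not from the answer above:
static void drawOffscreen(CGImageRef sourceImage, size_t width, size_t height)
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace,
                                             kCGImageAlphaNoneSkipLast);

    // Check the source image's pixel format: if it differs from the context's,
    // Quartz converts every pixel on every draw.
    if (CGImageGetAlphaInfo(sourceImage) != kCGImageAlphaNoneSkipLast) {
        // ...convert sourceImage once up front into a matching bitmap...
    }

    // Do the offscreen work on a CGLayer obtained from the context.
    CGLayerRef layer = CGLayerCreateWithContext(ctx, CGSizeMake(width, height), NULL);
    CGContextRef layerCtx = CGLayerGetContext(layer);
    CGContextDrawImage(layerCtx, CGRectMake(0, 0, width, height), sourceImage);

    // Snapshot the context if needed, but release that CGImage BEFORE stamping the layer back.
    CGImageRef snapshot = CGBitmapContextCreateImage(ctx);
    // ...read whatever you need from snapshot...
    CGImageRelease(snapshot);

    // "Stamp" the layer back onto the original context.
    CGContextDrawLayerAtPoint(ctx, CGPointZero, layer);

    CGLayerRelease(layer);
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
}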
I had a similar problem. My application has to redraw a picture almost as large as the screen size. The problem came down to drawing, as fast as possible, two images of the same resolution, neither rotated nor flipped, but scaled and positioned in different places on the screen each time. In the end I was able to get ~15-20 FPS on iPad 1 and ~20-25 FPS on iPad 4. So... hope this helps someone:
Exactly as typewriter said, you have to use the same pixel format. Using one with AlphaNone gives a speed boost. But even more important, the argb32_image call in my case was making numerous calls converting pixels from ARGB to BGRA. So the best bitmapInfo value for me was (at the time; Apple may of course change something here in the future):
const CGBitmapInfo g_bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast;
CGContextDrawImage may work faster if the rectangle argument is made integral (via CGRectIntegral). This seems to have more effect when the image is scaled by a factor close to 1.
Using layers actually slowed things down for me. Probably something has changed since 2011 in some internal calls.
Setting the interpolation quality for the context lower than the default (with CGContextSetInterpolationQuality) is important. I would recommend using (IS_RETINA_DISPLAY ? kCGInterpolationNone : kCGInterpolationLow). The IS_RETINA_DISPLAY macro is taken from here.
Make sure you get the CGColorSpaceRef from CGColorSpaceCreateDeviceRGB() or the like when creating the context. Some performance issues have been reported when using a fixed color space instead of requesting the device's.
Inheriting the view class from UIImageView and simply setting self.image to the image created from the context proved useful to me. However, read up on using UIImageView first if you want to do this, because it requires some changes in code logic (drawRect: isn't called anymore).
And if you can avoid scaling your image at the time of actual drawing, try to do so. Drawing a non-scaled image is significantly faster; unfortunately, for me that was not an option.
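A small sketch combining the interpolation-quality and CGRectIntegral suggestions above; the function name is made up, and the scale check stands in for the IS_RETINA_DISPLAY macro linked above:
static void drawImageFast(CGContextRef ctx, CGImageRef image, CGRect destRect)
{
    // Lower-than-default interpolation: none on retina, low otherwise.
    BOOL isRetina = ([UIScreen mainScreen].scale >= 2.0);
    CGContextSetInterpolationQuality(ctx, isRetina ? kCGInterpolationNone : kCGInterpolationLow);

    // An integral destination rect avoids sub-pixel sampling (matters most near scale 1).
    CGContextDrawImage(ctx, CGRectIntegral(destRect), image);
}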