iOS: Redrawing image to prevent deferred decompression resulting in a bigger image

I've noticed that some people redraw images into a CGContext to prevent deferred decompression, and this has caused a bug in our app.
The bug is that the size of the image appears to remain the same, but the data from the image's CGImageDataProvider has extra bytes appended to it.
For example, we have a 797x500 PNG image downloaded from the Internet, and AsyncImageView redraws it and returns the redrawn image.
Here is the code:
UIImage *image = [[UIImage alloc] initWithData:data];
if (image)
{
    // Log to compare size and data length...
    NSLog(@"BEFORE: %f %f", image.size.width, image.size.height);
    NSLog(@"LEN %ld", CFDataGetLength(CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage))));

    // Original code from AsyncImageView
    // redraw to prevent deferred decompression
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawAtPoint:CGPointZero];
    image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Log to compare size and data length...
    NSLog(@"AFTER: %f %f", image.size.width, image.size.height);
    NSLog(@"LEN %ld", CFDataGetLength(CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage))));

    // Some other code...
}
The log shows as follows:
BEFORE: 797.000000 500.000000
LEN 1594000
AFTER: 797.000000 500.000000
LEN 1600000
I decided to print the bytes one by one, and sure enough twelve zero bytes were appended to each row.
Basically, the redrawing was producing the pixel data of an 800x500 image. Because of this, our app was looking at the wrong pixel whenever it tried to read the (797 * row + column)th pixel.
We're not using any big images, so deferred decompression doesn't pose any problems for us, but if I decide to use this redrawing method, there's a chance I might introduce a subtle bug.
Does anyone have a solution to this? Or is this a bug introduced by Apple that we can't really do anything about?

As you've discovered, rows are padded out to a convenient size. This is generally done to make vector algorithms more efficient. You just need to adapt to that layout if you're going to use CGImage this way: call CGImageGetBytesPerRow to find out the actual number of bytes allocated per row, and then compute your offsets from that (bytesPerRow * row + bytesPerPixel * column) instead of assuming rows are exactly width pixels wide.
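For illustration (a sketch, not part of the original answer), reading a single pixel while honoring the padded row stride looks roughly like this, using only standard CGImage accessors:
CGImageRef cgImage = image.CGImage;
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
const UInt8 *bytes = CFDataGetBytePtr(pixelData);
size_t bytesPerRow = CGImageGetBytesPerRow(cgImage);        // 3200 here, not 797 * 4 = 3188
size_t bytesPerPixel = CGImageGetBitsPerPixel(cgImage) / 8; // usually 4 (RGBA/BGRA)

size_t row = 10, column = 20;                               // example coordinates
const UInt8 *pixel = bytes + row * bytesPerRow + column * bytesPerPixel;
// pixel[0..3] are the component values; their order depends on the image's bitmap info.

CFRelease(pixelData);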
That's probably the best approach for you, but if you need to get rid of the padding, you can do so by creating your own CGBitmapContext and rendering into it. That's a heavily covered topic around Stack Overflow if you're not familiar with it. For example: How to get pixel data from a UIImage (Cocoa Touch) or CGImage (Core Graphics)?
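A minimal sketch of that, assuming an 8-bit RGBA layout (again illustrative, not from the original answer): the context is created with a bytesPerRow of exactly width * 4, so the resulting buffer has no row padding.
CGImageRef cgImage = image.CGImage;
size_t width  = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
size_t tightBytesPerRow = width * 4;                        // tightly packed rows

void *buffer = calloc(height, tightBytesPerRow);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(buffer, width, height, 8, tightBytesPerRow,
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);

CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
CGContextRelease(context);
// buffer now holds exactly width * height * 4 bytes with no padding;
// free(buffer) when you're done with it.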

Related

The memory cost of using drawInRect to resize a picture

I recently switched from the old Assets library to PHAssets in my project. However, when I use my app to scale some pictures, it usually crashes.
Using the debugger, I found that it is a memory problem.
I use the code below to resize a picture:
+ (UIImage *)scaleRetangleToFitLen:(UIImage *)img sWidth:(float)wid sHeight:(float)hei {
    CGSize sb = img.size;
    if (img.size.height / img.size.width > hei / wid) {
        sb = CGSizeMake(wid, wid * img.size.height / img.size.width);
    } else {
        sb = CGSizeMake(img.size.width * hei / img.size.height, hei);
    }
    if (sb.width > img.size.width || sb.height > img.size.height) {
        sb = img.size;
    }
    UIImage *scaledImage = nil;
    UIGraphicsBeginImageContext(sb);
    [img drawInRect:CGRectMake(0, 0, sb.width, sb.height)];
    scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    img = nil;
    return scaledImage;
}
The memory increases by about 50 MB when the line
[img drawInRect:CGRectMake(0, 0, sb.width, sb.height)]
runs, and it is not freed even after the method has finished.
The target width and height are 304x228 and the original image is about 3264x2448; the returned image is 304x228. In other words, the image I actually want in the end is just a 304x228 image, yet it takes 50+ MB of memory.
Is there any way to free the memory that the drawInRect: call takes?
(@autoreleasepool does not help ~ 😢 😢)
When loading an image, iOS usually doesn't decompress it until it really needs to. So the image you pass into your function is most likely a JPEG or PNG that iOS keeps in memory in its compressed state. The moment you draw it, it gets decompressed first, and the memory therefore increases significantly. I would expect an increase of roughly 3264 x 2448 x 4 bytes ≈ 32 MB (not 50 MB).
To get rid of the memory again, you need to make sure you release all references to the image you pass into your function. So the problem is outside the code you show in your question.
For a more specific answer, you'll need to show all the code that works with the original image.
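As an illustration of what releasing all references can look like at the call site (a sketch only; the class name, file path, and helper below are hypothetical, not from the question):
UIImage *thumbnail = nil;
@autoreleasepool {
    // Keep the only strong reference to the full-size image inside this scope.
    UIImage *original = [UIImage imageWithContentsOfFile:photoPath]; // hypothetical path
    thumbnail = [ImageUtils scaleRetangleToFitLen:original sWidth:304 sHeight:228];
}
// By here the decompressed 3264x2448 bitmap can be reclaimed; only the small
// 304x228 thumbnail stays alive, provided nothing else still retains the original.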

UIImage Distortion from UIGraphicsBeginImageContext with larger files (pixel formats, codecs?)

I'm cropping UIImages with a UIBezierPath using UIGraphicsContext:
CGSize thumbnailSize = CGSizeMake(54.0f, 45.0f); // dimensions of UIBezierPath
UIGraphicsBeginImageContextWithOptions(thumbnailSize, NO, 0);
[path addClip];
[originalImage drawInRect:CGRectMake(0, originalImage.size.height/-3, thumbnailSize.width, originalImage.size.height)];
UIImage *maskedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But for some reason my images are getting stretched vertically (everything looks slightly long and skinny), and this effect gets stronger the bigger my originalImage is. I'm sure the originalImages are perfectly fine before I do these operations (I've checked).
My images are all 9:16 (say 72px wide by 128px tall) if that matters.
I've seen that UIGraphics creates a bitmap with an "ARGB 32-bit integer pixel format using host-byte order"; I'll admit some ignorance when it comes to pixel formats, but I felt this MAY be relevant because I'm not sure whether that's the same pixel format used to encode the picture data.
No idea how relevant this is but here is the FULL processing pipeline:
I'm capturing using AVFoundation and I set my photoSettings as
NSDictionary *photoSettings = @{AVVideoCodecKey : AVVideoCodecH264};
capturing using captureStillImageAsynchronouslyFromConnection:, then turning it into NSData using [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer], then downsizing it into a thumbnail by creating a CGDataProviderRef with that CFData, converting to a CGImageRef using CGImageSourceCreateThumbnailAtIndex, and getting a UIImage from that.
Later, I once again turn it into NSData using UIImageJPEGRepresentation(thumbnail, 0.7) so I can store it. And finally, when I'm ready to display it, I call my own method detailed above, [self maskImage:[UIImage imageWithData:imageData] toPath:_thumbnailPath], display it in a UIImageView, and set contentMode = UIViewContentModeScaleAspectFit.
If the method I'm using to mask the UIImage with the UIBezierPath is fine, I may end up explicitly setting the photoOutput settings with [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey, nil], and then I can probably use something like how to convert a CVImageBufferRef to UIImage and change a lot of my code... but I'd really rather not do that unless completely necessary since, as I've mentioned, I really don't know much about video encoding or all these low-level graphics objects.
This line:
[originalImage drawInRect:CGRectMake(0, originalImage.size.height/-3, thumbnailSize.width, originalImage.size.height)];
is a problem. You are drawing originalImage, but you are specifying thumbnailSize.width as the width and originalImage's height as the height. That mismatch messes up the image's aspect ratio.
You need a width and a height derived from the same image size. Pick whichever dimension you need to fit and scale the other by the same factor to maintain the proper aspect ratio.
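A minimal sketch of what that could look like (not from the original answer; scaling by width and centering vertically is just one reasonable choice):
// Scale both dimensions by the same factor so the aspect ratio is preserved.
CGFloat scale = thumbnailSize.width / originalImage.size.width;
CGFloat drawHeight = originalImage.size.height * scale;
CGFloat offsetY = (thumbnailSize.height - drawHeight) / 2.0f; // center vertically within the clip

UIGraphicsBeginImageContextWithOptions(thumbnailSize, NO, 0);
[path addClip];
[originalImage drawInRect:CGRectMake(0, offsetY, thumbnailSize.width, drawHeight)];
UIImage *maskedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();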

fast method to get RGB data from UIImage (photo library)

I would like to get a data array containing the RGB representation of a picture stored in the photo library (an ALAsset) on iOS (iOS 8 SDK).
I have already tried this method:
get a CGImage from the ALAsset with [ALAssetRepresentation fullScreenImage]
draw the CGImage into a CGContext.
That method works and I get a pointer to the RGB data, but it is really slow (there are two conversions). The final goal is to load the image quickly into an OpenGL texture.
My code to get an image from the photo library:
ALAsset* currentPhotoAsset = (ALAsset*) [self.photoAssetList objectAtIndex:_currentPhotoAssetIndex];
ALAssetRepresentation *representation = [currentPhotoAsset defaultRepresentation];
//-> REALLY SLOW
UIImage *currentPhoto = [UIImage imageWithCGImage:[representation fullScreenImage]];
My code to draw on the CGContext:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * textureWidth;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(textureData, textureWidth, textureHeight,
                                             bitsPerComponent, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
//--> THAT'S REALLY SLOW
CGContextDrawImage(context, CGRectMake(0, 0, textureWidth, textureHeight), cgimage);
CGContextRelease(context);
There is not much you can do, but if you find a way I would be happy to hear about it.
The thing is that you need to decompress the image (JPEG, PNG, ...), which is usually done by creating a CGImage (a UIImage is just a wrapper around one). But you are not allowed to get the data pointer directly from the CGImage; you need to copy the data (the really slow draw call). Then again, if the target size and format are the same as the source's, this operation should be quite fast, since the data should more or less simply be copied. On the other hand, if your textureWidth and textureHeight differ from the image dimensions, the pixels need to be interpolated and this call can become even a few times slower.
The only way around this that I can see is to use some library that decompresses the image directly from the file and gives you the data pointer of that image. But I have never had a performance issue loading image textures (do it on a background thread).
Anyway, in case you are not doing something similar already: what I do is get the image size, then find the POT (power-of-two) width and height that contain the image size. Then I create an empty texture with those POT dimensions and call sub-image (glTexSubImage2D) to pass the original image data into the texture. I use a custom texture class to handle this, which also generates texture coordinates so that the correct part of the texture is drawn to the framebuffer. This class is then extended to support atlasing, which is generally what you want when dealing with many images (textures).
I hope this info helps you in some way...
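For concreteness, a rough sketch of that POT-plus-sub-image approach (assuming OpenGL ES and a tightly packed RGBA buffer in textureData; this is not the answerer's texture class, just a fragment illustrating the idea):
#import <OpenGLES/ES2/gl.h>

// Round a dimension up to the next power of two.
static GLsizei nextPOT(GLsizei x) {
    GLsizei pot = 1;
    while (pot < x) pot <<= 1;
    return pot;
}

GLsizei potWidth  = nextPOT(textureWidth);
GLsizei potHeight = nextPOT(textureHeight);

GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Allocate empty POT-sized storage, then upload the real pixels into a sub-rectangle.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, potWidth, potHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, textureWidth, textureHeight,
                GL_RGBA, GL_UNSIGNED_BYTE, textureData);
// Texture coordinates for the image then run from 0 to textureWidth/potWidth
// (and textureHeight/potHeight), which is what the custom texture class tracks.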

Can you load only a smaller rectangular portion of a larger on-disk image into memory?

On iOS and most mobile devices there is a restriction on the size of the image that you can load, due to memory constraints. Is it possible to have a large image on disk (say 5,000 by 5,000 pixels) but only read a smaller rectangle within that image (say 100x100) into memory for display?
In other words, do you need to load the entire image into memory if you just want to see a small subsection of it? If it's possible to load just the smaller portion, how can we do this?
This way, one could save a lot of space, as spritesheets do for repetitive content. It is important to note that the overall goal is to minimize the file size, so the large image should be compressed with JPEG, PNG, or some other kind of compression. I suspect video formats work like this, because you never load an entire video into memory.
Although I have not utilized the techniques, you might find the following Apple Sample useful:
LargeImageDownsizing Sample
You could do something with mapped NSData like this:
UIImage *pixelDataForRect(NSString *fileName, const CGRect pixelRect)
{
    // get the pixels from that image
    uint32_t width  = pixelRect.size.width;
    uint32_t height = pixelRect.size.height;

    // create the context
    UIGraphicsBeginImageContext(CGSizeMake(width, height));
    CGContextRef bitMapContext = UIGraphicsGetCurrentContext();
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, height);
    CGContextConcatCTM(bitMapContext, flipVertical);

    // render the image (assume PNG compression)
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)[NSData dataWithContentsOfMappedFile:fileName]);
    CGImageRef image = CGImageCreateWithPNGDataProvider(provider, NULL, YES, kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);

    uint32_t imageWidth  = CGImageGetWidth(image);
    uint32_t imageHeight = CGImageGetHeight(image);
    CGRect drawRect = CGRectMake(-pixelRect.origin.x,
                                 -((imageHeight - pixelRect.origin.y) - height),
                                 imageWidth, imageHeight);
    CGContextDrawImage(bitMapContext, drawRect, image);
    CGImageRelease(image);

    UIImage *retImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return retImage;
}
Your best bet is to use a UIScrollView with a CATiledLayer.
Check out the "Designing Apps with Scroll Views" presentation from WWDC 2010 for a description of how to do this:
https://developer.apple.com/videos/wwdc/2010/
The idea is to take your large image and chop it up into tiles, then use a UIScrollView to give the user a scrollable view of the image, loading only those sections of the image that are necessary based on the position of the scroll view. This is accomplished using CATiledLayer.
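A very rough sketch of the CATiledLayer side of that (assumptions: the tiles have already been pre-cut to files on disk, and the naming scheme in tileForRow:column:scale: is hypothetical):
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

@interface TiledImageView : UIView
@end

@implementation TiledImageView

// Backing the view with CATiledLayer makes UIKit request drawing tile by tile.
+ (Class)layerClass {
    return [CATiledLayer class];
}

- (void)drawRect:(CGRect)rect {
    // CATiledLayer calls drawRect: once per visible tile, so only the tiles the
    // scroll view actually shows are ever decoded into memory.
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGFloat scale = CGContextGetCTM(context).a;
    CGSize tileSize = ((CATiledLayer *)self.layer).tileSize;
    int col = (int)floor(CGRectGetMinX(rect) * scale / tileSize.width);
    int row = (int)floor(CGRectGetMinY(rect) * scale / tileSize.height);
    UIImage *tile = [self tileForRow:row column:col scale:scale];
    [tile drawInRect:rect];
}

// Hypothetical naming scheme for the pre-cut tile files, e.g. "large_1.0_2_3.png".
- (UIImage *)tileForRow:(int)row column:(int)col scale:(CGFloat)scale {
    NSString *name = [NSString stringWithFormat:@"large_%0.1f_%d_%d", scale, col, row];
    return [UIImage imageNamed:name];
}

@end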

Should retain count increase after an image rotation?

I'm using the following code to rotate an image
http://www.platinumball.net/blog/2010/01/31/iphone-uiimage-rotation-and-scaling/
That's one of the few image transformations I do before uploading an image to the server; I also have some other transformations: normalize, crop, resize.
Each of the transformations returns a (UIImage *), and I add those functions using a category. I use them like this:
UIImage *img = ...; // image from camera
img = [[[[img normalize] rotate] scale] resize];
[upload img];
After selecting 3-4 photos from the camera and executing the same code each time, I get a Memory Warning in Xcode.
I'm guessing I have a memory leak somewhere (even though I'm using ARC). I'm not very experienced with the Xcode debugging tools, so I started printing the retain count after each method:
UIImage *img = ...; // image from camera
img = [img normalize];
img = [img rotate]; // retain count increases :(
img = [img scale];
img = [img resize];
The only operation that increases the retain count is the rotation. Is this normal?
The only operation that increases the retain count is the rotation. Is this normal?
It's quite possible that the UIGraphicsGetImageFromCurrentImageContext() call in your rotate function ends up retaining the image. If so, it almost certainly also autoreleases the image in keeping with the normal Cocoa memory management rules. Either way, you shouldn't worry about it. As long as your rotate function doesn't itself contain any unbalanced retain (or alloc, new, or copy) calls, you should expect to be free of leaks. If you do suspect a leak, it's better to track it down with Instruments than by watching retainCount yourself.
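As an aside (not part of the original answer), if the memory warnings come from several photos being processed back to back, a local autorelease pool around the chain lets the autoreleased intermediate images be reclaimed between photos. A sketch, where imageFromCamera and upload: are hypothetical stand-ins for the asker's own code:
UIImage *result = nil;
@autoreleasepool {
    UIImage *img = [self imageFromCamera];           // hypothetical source
    img = [[[[img normalize] rotate] scale] resize]; // the asker's category methods
    result = img;                                    // keep only the final image alive
}
// The intermediate normalized/rotated/scaled images can be released here,
// before the next photo is processed.
[self upload:result];                                // hypothetical upload helper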

Resources