Hi, I am resizing my image using the code from
http://vocaro.com/trevor/blog/2009/10/12/resize-a-uiimage-the-right-way/
- (UIImage *)resizedImage:(CGSize)newSize
                transform:(CGAffineTransform)transform
           drawTransposed:(BOOL)transpose
     interpolationQuality:(CGInterpolationQuality)quality {
    CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
    CGRect transposedRect = CGRectMake(0, 0, newRect.size.height, newRect.size.width);
    CGImageRef imageRef = self.CGImage;
    CGBitmapInfo bitMapInfo = CGImageGetBitmapInfo(imageRef);

    // Build a context that's the same dimensions as the new size
    CGContextRef bitmap = CGBitmapContextCreate(NULL,
                                                newRect.size.width,
                                                newRect.size.height,
                                                CGImageGetBitsPerComponent(imageRef),
                                                0,
                                                CGImageGetColorSpace(imageRef),
                                                bitMapInfo);

    // Rotate and/or flip the image if required by its orientation
    CGContextConcatCTM(bitmap, transform);

    // Set the quality level to use when rescaling
    CGContextSetInterpolationQuality(bitmap, quality);

    // Draw into the context; this scales the image
    CGContextDrawImage(bitmap, transpose ? transposedRect : newRect, imageRef);

    // Get the resized image from the context and a UIImage
    CGImageRef newImageRef = CGBitmapContextCreateImage(bitmap);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];

    // Clean up
    CGContextRelease(bitmap);
    CGImageRelease(newImageRef);

    return newImage;
}
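For context, in the linked category this method is private and is driven by a public wrapper that works out the orientation transform. A minimal sketch of calling it directly, assuming an already-upright image so an identity transform and no transposition are acceptable (the asset name is hypothetical):
// Hedged usage sketch, e.g. inside a view controller method.
// Assumes the source image needs no orientation fix-up, so the identity
// transform and transpose == NO are fine.
UIImage *original = [UIImage imageNamed:@"photo.png"]; // hypothetical asset
UIImage *thumbnail = [original resizedImage:CGSizeMake(200.0f, 150.0f)
                                  transform:CGAffineTransformIdentity
                             drawTransposed:NO
                       interpolationQuality:kCGInterpolationHigh];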
It works as expected for normal images. However, it fails when I give it a PNG-8 image. I know it is a PNG-8 image from running file image.png on the command line.
The output is
image.png: PNG image data, 800 x 264, 8-bit colormap, non-interlaced
The error message in the console is colorspace not supported.
After some googling, I realized that "indexed color spaces are not supported for bitmap graphics contexts."
Following some advice, instead of using the original colorspace, I changed it to
colorSpace = CGColorSpaceCreateDeviceRGB();
Now I am getting this new error:
CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 24 bits/pixel; 3-component color space; kCGImageAlphaNone; 2400 bytes/row.
FYI, my image is 800 px wide.
How can I resolve this issue? Thanks a lot!
I realized that the list of supported formats is here:
https://developer.apple.com/library/ios/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_context/dq_context.html#//apple_ref/doc/uid/TP30001066-CH203-BCIBHHBB
And there is none with 24 bits/pixel.
So I ended using the accepted solution here:
iPhone: Changing CGImageAlphaInfo of CGImage
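I haven't copied that accepted answer verbatim, but the gist (as I understand it) is to stop reusing the source image's color space and bitmap info and instead force a known-supported 32-bit RGBA configuration, roughly like this sketch:
// Sketch: force a supported 32-bit RGBA context instead of copying the
// indexed color space / bitmap info of the PNG-8 source.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmap = CGBitmapContextCreate(NULL,
                                            newRect.size.width,
                                            newRect.size.height,
                                            8,     // bits per component
                                            0,     // let Quartz pick bytes per row
                                            colorSpace,
                                            kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);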
I'm looking for a way to optimize my images by converting their color depth from 32-bit to 16-bit rather than just resizing them. So this is what I'm doing:
- (UIImage *)optimizeImage:(UIImage *)image
{
    float newWidth = image.size.width;
    float newHeight = image.size.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, newWidth, newHeight, 5, newWidth * 4,
                                                 colorSpace, kCGImageAlphaNone | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGInterpolationQuality quality = kCGInterpolationHigh;
    CGContextSetInterpolationQuality(context, quality);
    CGImageRef srcImage = CGImageRetain(image.CGImage);
    CGContextDrawImage(context, CGRectMake(0, 0, newWidth, newHeight),
                       srcImage);
    CGImageRelease(srcImage);
    CGImageRef dst = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    UIImage *result = [UIImage imageWithCGImage:dst];
    CGImageRelease(dst);
    UIGraphicsEndImageContext();
    return result;
}
The issue with this piece of code is that when I run it, I get the following error:
CGBitmapContextCreate: unsupported parameter combination: 5 integer
bits/component; 16 bits/pixel; 3-component color space;
kCGImageAlphaNone; 10392 bytes/row.
So my question is: what is the supported combination for CGBitmapContextCreate? What should I select for the bitmapInfo parameter in this situation? Please suggest.
So I found my answer in the Mac Developer Library. There's a table of the supported pixel formats, and this is the one I was looking for:
RGB - 16 bpp, 5 bpc, kCGImageAlphaNoneSkipFirst
So I changed my bitmapInfo accordingly and the context is created just fine. Hopefully this is useful for someone.
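For completeness, a sketch of what the fixed context creation might look like. The kCGBitmapByteOrder16Little flag and the bytes-per-row value are my assumptions; the essential change is kCGImageAlphaNoneSkipFirst:
// Sketch of a 16 bpp / 5 bpc RGB context matching the supported-format table.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, newWidth, newHeight,
                                             5,             // bits per component
                                             newWidth * 2,  // 16 bits per pixel
                                             colorSpace,
                                             kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder16Little);
CGColorSpaceRelease(colorSpace);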
First, I am converting an original image to grayscale, and that conversion works. The problem is how to convert the grayscale back to the original image when the user touches that place.
What I'm unable to understand is how to convert **grayscale back to the original**.
Here is my code for **original to grayscale**:
- (UIImage *)convertImageToGrayScale:(UIImage *)image
{
    CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);
    CGContextDrawImage(context, imageRect, [image CGImage]);
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *newImage = [UIImage imageWithCGImage:imageRef];
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    CFRelease(imageRef);
    return newImage;
}
Guidance needed. Thanks in advance.
You can't convert a gray scale image back to color because you no longer have any color information in the image data.
If you mean you have a color image that you're converting to gray scale, and then when the user taps you show a color version, then instead you need to hang on to the original image and show that one in color.
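A minimal sketch of that approach in a hypothetical view controller: keep both UIImages around and simply swap them when the user taps (self.imageView, self.originalImage and self.grayImage are assumed properties):
// Sketch: keep both versions and swap on tap; no conversion back is needed.
- (void)viewDidLoad {
    [super viewDidLoad];
    self.grayImage = [self convertImageToGrayScale:self.originalImage];
    self.imageView.image = self.grayImage;
    self.imageView.userInteractionEnabled = YES;
    UITapGestureRecognizer *tap =
        [[UITapGestureRecognizer alloc] initWithTarget:self
                                                action:@selector(showOriginal:)];
    [self.imageView addGestureRecognizer:tap];
}

- (void)showOriginal:(UITapGestureRecognizer *)recognizer {
    // Just show the color image we kept around.
    self.imageView.image = self.originalImage;
}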
I am using fixed-size images which are 98 by 98 pixels. What I ended up doing is creating a blank 98 by 98 PNG in Photoshop and calling it rgboverlay.png. Then I just overlay my grayscale image on top of the blank one, and the resulting image is RGB. Here's the code; I originally got it as code to overlay one image on another.
// originalimage = your grayscale image
UIImage *thumb = [UIImage imageNamed:@"rgboverlay.png"];
CGSize size = CGSizeMake(98, 98);
UIGraphicsBeginImageContext(size);
CGPoint thumbPoint = CGPointMake(0, 25 - thumb.size.height / 2);
[thumb drawAtPoint:thumbPoint];
CGPoint starredPoint = CGPointMake(1, 1);
[originalimage drawAtPoint:starredPoint];
// result is the new RGB image
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This ended up working for me.
I have an app using the camera to take pictures. As soon as the picture is taken, I reduce the size of the image coming from the camera.
Running the method that reduces the size of the image makes the memory usage peak from 21 MB to 61 MB, sometimes near 69 MB!
I have added @autoreleasepool to every method involved in this process. Things improved a little bit, but not as much as I expected. I don't expect the memory usage to triple when reducing an image, especially because the new image being produced is smaller.
These are the methods I have tried:
- (UIImage *)reduceImage:(UIImage *)image toSize:(CGSize)size {
    @autoreleasepool {
        UIGraphicsBeginImageContext(size);
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextTranslateCTM(context, 0.0, size.height);
        CGContextScaleCTM(context, 1.0, -1.0);
        CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, size.width, size.height), image.CGImage);
        UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return scaledImage;
    }
}
and also
- (UIImage *)reduceImage:(UIImage *)image toSize:(CGSize)size {
    @autoreleasepool {
        UIGraphicsBeginImageContext(size);
        [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
        UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return result;
    }
}
There is no difference at all between these two.
NOTE: the original image is 3264 x 2448 pixels x 4 bytes/pixel = 32 MB and the final image is 1136 x 640, that is 2.9 MB... sum both numbers and you get 35 MB, not 70!
Is there a way to reduce the size of the image without making the memory usage peak into the stratosphere? Thanks.
BTW, and out of curiosity: is there a way to reduce an image's dimensions without using Quartz?
The answer is here
It uses Core Graphics and 30-40% less memory.
#import <ImageIO/ImageIO.h>
- (UIImage *)resizedImageToRect:(CGRect)thumbRect
{
    CGImageRef imageRef = [self CGImage];
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);

    // There's a weirdness with kCGImageAlphaNone and CGBitmapContextCreate;
    // see "Supported Pixel Formats" in the Quartz 2D Programming Guide,
    // "Creating a Bitmap Graphics Context" section.
    // Only RGB 8-bit images with alpha of kCGImageAlphaNoneSkipFirst, kCGImageAlphaNoneSkipLast,
    // kCGImageAlphaPremultipliedFirst, and kCGImageAlphaPremultipliedLast, plus a few other
    // oddball image kinds, are supported.
    // The images on input here are likely to be png or jpeg files.
    if (alphaInfo == kCGImageAlphaNone)
        alphaInfo = kCGImageAlphaNoneSkipLast;

    // Build a bitmap context that's the size of the thumbRect
    CGContextRef bitmap = CGBitmapContextCreate(
        NULL,
        thumbRect.size.width,                  // width
        thumbRect.size.height,                 // height
        CGImageGetBitsPerComponent(imageRef),  // really needs to always be 8
        4 * thumbRect.size.width,              // rowbytes
        CGImageGetColorSpace(imageRef),
        alphaInfo
    );

    // Draw into the context; this scales the image
    CGContextDrawImage(bitmap, thumbRect, imageRef);

    // Get an image from the context and a UIImage
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage *result = [UIImage imageWithCGImage:ref];

    CGContextRelease(bitmap);  // ok if NULL
    CGImageRelease(ref);

    return result;
}
I added this as a category on UIImage.
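A hedged sketch of what the category declaration and a call could look like (the category and asset names are hypothetical):
// UIImage+Resize.h (hypothetical category name)
@interface UIImage (Resize)
- (UIImage *)resizedImageToRect:(CGRect)thumbRect;
@end

// Example call, e.g. inside a view controller method:
// shrink a full-resolution camera image down to 1136 x 640 pixels.
UIImage *cameraImage = [UIImage imageNamed:@"photo.jpg"];   // hypothetical source
UIImage *smallImage = [cameraImage resizedImageToRect:CGRectMake(0, 0, 1136, 640)];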
I am trying to access the pixels of a certain image which has been resized using this block:
- (UIImage *)imageResize:(UIImage *)imageResizable scaledToSize:(CGSize)newSize
{
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [imageResizable drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
... where newSize is the size of a UIView into which I am trying to fit this image.
Now, I am supposed to access the pixels of this image, and do some filtering on it.
I use the following code block:
- (UIImage *)filter:(UIImage *)image
{
    CGImageRef imageBuffTarget = [image CGImage];
    CFMutableDataRef pixelDataTarget = CFDataCreateMutableCopy(0, 0, CGDataProviderCopyData(CGImageGetDataProvider(imageBuffTarget)));
    NSUInteger width2 = CGImageGetWidth(imageBuffTarget);
    NSUInteger height2 = CGImageGetHeight(imageBuffTarget);
    UInt8 *target_image = (UInt8 *)CFDataGetMutableBytePtr(pixelDataTarget);

    // Going forward, I want to do some processing here, on the *target_image data.

    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo1 = kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast;
    CGContextRef context = CGBitmapContextCreate(target_image, CGImageGetWidth(imageBuffTarget), CGImageGetHeight(imageBuffTarget), CGImageGetBitsPerComponent(imageBuffTarget), CGImageGetBytesPerRow(imageBuffTarget), colorSpaceRef, bitmapInfo1);
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *newimage = [UIImage imageWithCGImage:imageRef];

    CGColorSpaceRelease(colorSpaceRef);
    CGContextRelease(context);
    CFRelease(imageRef);
    return newimage;
}
I take the image from the UIView, pass it on to the 'filter' method and set it back to the view.
But, on doing this, the app crashes and I get the following error in the console:
<Error>: CGBitmapContextCreate: invalid data bytes/row: should be at least 1280 for 8 integer bits/component, 3 components, kCGImageAlphaPremultipliedLast.
<Error>: CGBitmapContextCreateImage: invalid context 0x0
When I change:
CGContextRef context = CGBitmapContextCreate(target_image, CGImageGetWidth(imageBuffTarget), CGImageGetHeight(imageBuffTarget), CGImageGetBitsPerComponent(imageBuffTarget), CGImageGetBytesPerRow(imageBuffTarget), colorSpaceRef, bitmapInfo1);
TO
CGContextRef context = CGBitmapContextCreate(target_image, CGImageGetWidth(imageBuffTarget), CGImageGetHeight(imageBuffTarget), CGImageGetBitsPerComponent(imageBuffTarget), 2*CGImageGetBytesPerRow(imageBuffTarget), colorSpaceRef, bitmapInfo1);
(multiplying the 'bytes per row' by 2, which then exceeds 1280)
the app doesn't crash, but the output on the view is a distorted and skewed version of the original image.
Please note that when I call CGImageGetHeight(imageBuffTarget) and CGImageGetWidth(imageBuffTarget), I get the exact height and width of the ImageView whose size I passed into the imageResize method.
Could you please help me figure out the mistake in this code?
Thanks in advance.
I want to allow the user to take a picture and then show the grayscale version. However, it is very slow because the image file is too big / the resolution is too high.
How can I reduce the quality of the image when the user takes the picture?
Here's the code I am using for the transformation:
- (UIImage *)convertImageToGrayScale:(UIImage *)image
{
    // Create image rectangle with current image width/height
    CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);

    // Grayscale color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();

    // Create bitmap context with current image size and grayscale colorspace
    CGContextRef context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);

    // Draw image into current context, with specified rectangle
    // using previously defined context (with grayscale colorspace)
    CGContextDrawImage(context, imageRect, [image CGImage]);

    /* changes start here */
    // Create bitmap image info from pixel data in current context
    CGImageRef grayImage = CGBitmapContextCreateImage(context);

    // Release the colorspace and graphics context
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);

    // Make a new alpha-only graphics context
    context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, nil, kCGImageAlphaOnly);

    // Draw image into context with no colorspace
    CGContextDrawImage(context, imageRect, [image CGImage]);

    // Create alpha bitmap mask from current context
    CGImageRef mask = CGBitmapContextCreateImage(context);

    // Release graphics context
    CGContextRelease(context);

    // Make UIImage from grayscale image with alpha mask
    UIImage *grayScaleImage = [UIImage imageWithCGImage:CGImageCreateWithMask(grayImage, mask) scale:image.scale orientation:image.imageOrientation];

    // Release the CG images
    CGImageRelease(grayImage);
    CGImageRelease(mask);

    // Return the new grayscale image
    return grayScaleImage;
    /* changes end here */
}
How about downsampling the UIImage before passing it on to the grayscale translation? Something like:
NSData *imageAsData = UIImageJPEGRepresentation(imageFromCamera, 0.5);
UIImage *downsampledImaged = [UIImage imageWithData:imageAsData];
You could of course use compression qualities other than 0.5.
If you are using AVFoundation to capture the image you can set the quality of the image to be captured by changing the capture session preset like the following:
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetLow;
There is a table of which presets correspond to which resolutions in the AVFoundation Programming Guide.
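Not every device supports every preset, so it may be worth checking before assigning; a small sketch:
// Sketch: fall back if the device cannot use the requested preset.
AVCaptureSession *session = [[AVCaptureSession alloc] init];
if ([session canSetSessionPreset:AVCaptureSessionPreset640x480]) {
    session.sessionPreset = AVCaptureSessionPreset640x480;
} else {
    session.sessionPreset = AVCaptureSessionPresetMedium;
}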