I want to allow the user to take a picture and then show the grayscale version. However, it is very slow because the image file is too big/the resolution is too high.
How can I reduce the quality of the image when the user takes the picture?
Here's the code I am using for the transformation:
- (UIImage *)convertImageToGrayScale:(UIImage *)image
{
    // Create image rectangle with current image width/height
    CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
    // Grayscale color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    // Create bitmap context with current image size and grayscale colorspace
    CGContextRef context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);
    // Draw image into current context, with specified rectangle
    // using previously defined context (with grayscale colorspace)
    CGContextDrawImage(context, imageRect, [image CGImage]);
    /* changes start here */
    // Create bitmap image from the pixel data in the current context
    CGImageRef grayImage = CGBitmapContextCreateImage(context);
    // Release the colorspace and graphics context
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    // Make a new alpha-only graphics context (alpha-only contexts take no colorspace)
    context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, nil, kCGImageAlphaOnly);
    // Draw image into the alpha-only context
    CGContextDrawImage(context, imageRect, [image CGImage]);
    // Create an alpha bitmap mask from the current context
    CGImageRef mask = CGBitmapContextCreateImage(context);
    // Release the graphics context
    CGContextRelease(context);
    // Make a UIImage from the grayscale image with the alpha mask applied
    CGImageRef maskedGrayImage = CGImageCreateWithMask(grayImage, mask);
    UIImage *grayScaleImage = [UIImage imageWithCGImage:maskedGrayImage scale:image.scale orientation:image.imageOrientation];
    // Release the CG images (the UIImage keeps its own reference)
    CGImageRelease(maskedGrayImage);
    CGImageRelease(grayImage);
    CGImageRelease(mask);
    // Return the new grayscale image
    return grayScaleImage;
    /* changes end here */
}
How about downsampling the UIImage before passing it on to the grayscale conversion? Something like:
NSData *imageAsData = UIImageJPEGRepresentation(imageFromCamera, 0.5);
UIImage *downsampledImaged = [UIImage imageWithData:imageAsData];
You could of course use compression qualities other than 0.5.
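Note that lowering the JPEG quality shrinks the file size but not the pixel dimensions, so the decoded UIImage is still full resolution. If the slowness comes from resolution, a sketch like the following (the target size here is an assumption) redraws the photo at a smaller size before the grayscale pass:
// Hypothetical target size; pick whatever your UI actually needs.
CGSize targetSize = CGSizeMake(640, 480);
UIGraphicsBeginImageContextWithOptions(targetSize, YES, 1.0);
[imageFromCamera drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
UIImage *downsampledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();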
If you are using AVFoundation to capture the image you can set the quality of the image to be captured by changing the capture session preset like the following:
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetLow;
There is a table of which presets correspond to which resolutions in the AVFoundation Programming Guide.
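For example, a sketch that asks for a specific resolution only when the device supports it (the preset choice here is an assumption; canSetSessionPreset: reports what the hardware can do):
AVCaptureSession *session = [[AVCaptureSession alloc] init];
// Fall back to a generic preset if the device can't do 640x480.
if ([session canSetSessionPreset:AVCaptureSessionPreset640x480]) {
    session.sessionPreset = AVCaptureSessionPreset640x480;
} else {
    session.sessionPreset = AVCaptureSessionPresetMedium;
}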
Related
Firstly, I am converting an original image to grayscale, and it is successfully converted. The problem is how to convert the grayscale version back to the original image when the user touches that place.
What I'm unable to understand is how to convert **grayscale back to the original**.
Here is my code (**original to grayscale**):
- (UIImage *)convertImageToGrayScale:(UIImage *)image
{
    CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);
    CGContextDrawImage(context, imageRect, [image CGImage]);
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *newImage = [UIImage imageWithCGImage:imageRef];
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    CFRelease(imageRef);
    return newImage;
}
Guidance needed. Thanks in advance.
You can't convert a grayscale image back to color, because you no longer have any color information in the image data.
If you mean you have a color image that you're converting to grayscale, and when the user taps you want to show a color version, then you need to hang on to the original image and show that one in color instead.
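A minimal sketch of that approach (the property and view names are assumptions): keep both versions of the image around and swap them on touch.
@property (nonatomic, strong) UIImage *originalImage; // assumed property
@property (nonatomic, strong) UIImage *grayImage;     // assumed property
@property (nonatomic, strong) UIImageView *imageView; // assumed property

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    // Show the color original while the finger is down...
    self.imageView.image = self.originalImage;
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    // ...and go back to the grayscale version on release.
    self.imageView.image = self.grayImage;
}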
I am using fixed-size images, which are 98 by 98 pixels. What I ended up doing was creating a blank 98 by 98 PNG in Photoshop and calling it rgboverlay.png. Then I just overlay my grayscale image on top of the blank one, and the resulting image is RGB. Here's the code; I originally got it from code for overlaying one image on another.
originalimage = your grayscale image
UIImage *thumb = [UIImage imageNamed:@"rgboverlay.png"];
CGSize size = CGSizeMake(98, 98);
UIGraphicsBeginImageContext(size);
// draw the blank RGB overlay first
CGPoint tempPoint = CGPointMake(0, 0);
[thumb drawAtPoint:tempPoint];
// then draw the grayscale image on top
CGPoint starredPoint = CGPointMake(0, 0);
[originalimage drawAtPoint:starredPoint];
// result is the new RGB image
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This ended up working for me.
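For what it's worth, the blank PNG may not be strictly necessary: UIGraphicsBeginImageContext already creates an RGB-backed bitmap context, so a sketch like this (grayImage is an assumed variable) should also yield an RGB image from a grayscale source:
UIGraphicsBeginImageContext(CGSizeMake(98, 98));
// Drawing re-renders the grayscale pixels into the context's RGB backing store.
[grayImage drawAtPoint:CGPointZero];
UIImage *rgbImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();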
I have an app using the camera to take pictures. As soon as the picture is taken, I reduce the size of the image coming from the camera.
Running the method that reduces the size of the image makes the memory usage peak from 21 MB to 61 MB, sometimes near 69 MB!
I have added @autoreleasepool to every method involved in this process. Things improved a little bit, but not as much as I expected. I don't expect the memory usage to triple when reducing an image, especially because the new image being produced is smaller.
These are the methods I have tried:
- (UIImage *)reduceImage:(UIImage *)image toSize:(CGSize)size {
    @autoreleasepool {
        UIGraphicsBeginImageContext(size);
        CGContextRef context = UIGraphicsGetCurrentContext();
        // Flip the coordinate system so CGContextDrawImage doesn't draw upside down
        CGContextTranslateCTM(context, 0.0, size.height);
        CGContextScaleCTM(context, 1.0, -1.0);
        CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, size.width, size.height), image.CGImage);
        UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return scaledImage;
    }
}
and also
- (UIImage *)reduceImage:(UIImage *)image toSize:(CGSize)size {
    @autoreleasepool {
        UIGraphicsBeginImageContext(size);
        [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
        UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return result;
    }
}
There is no difference at all between these two.
NOTE: the original image is 3264x2448 pixels x 4 bytes/pixel = 32 MB, and the final image is 1136x640, that is 2.9 MB... sum both numbers and you get 35 MB, not 70!
Is there a way to reduce the size of the image without making the memory usage peak into the stratosphere? Thanks.
BTW, and out of curiosity: is there a way to reduce an image's dimensions without using Quartz?
The answer is here.
It uses CoreGraphics and needs about 30-40% less memory.
#import <ImageIO/ImageIO.h>

- (UIImage *)resizedImageToRect:(CGRect)thumbRect
{
    CGImageRef imageRef = [self CGImage];
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);
    // There's a weirdness with kCGImageAlphaNone and CGBitmapContextCreate;
    // see "Supported Pixel Formats" in the Quartz 2D Programming Guide,
    // "Creating a Bitmap Graphics Context" section.
    // Only 8-bit RGB images with alpha of kCGImageAlphaNoneSkipFirst, kCGImageAlphaNoneSkipLast,
    // kCGImageAlphaPremultipliedFirst, and kCGImageAlphaPremultipliedLast,
    // plus a few other oddball image kinds, are supported.
    // The images on input here are likely to be PNG or JPEG files.
    if (alphaInfo == kCGImageAlphaNone)
        alphaInfo = kCGImageAlphaNoneSkipLast;
    // Build a bitmap context that's the size of the thumbRect
    CGContextRef bitmap = CGBitmapContextCreate(
        NULL,
        thumbRect.size.width,                 // width
        thumbRect.size.height,                // height
        CGImageGetBitsPerComponent(imageRef), // really needs to always be 8
        4 * thumbRect.size.width,             // rowbytes
        CGImageGetColorSpace(imageRef),
        alphaInfo
    );
    // Draw into the context; this scales the image
    CGContextDrawImage(bitmap, thumbRect, imageRef);
    // Get a CGImage from the context, then a UIImage
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage *result = [UIImage imageWithCGImage:ref];
    CGContextRelease(bitmap); // ok if NULL
    CGImageRelease(ref);
    return result;
}
Add it as a category on UIImage.
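On the "without using Quartz" aside: since ImageIO is already imported, a lower-memory sketch (not verified against this exact workload; imageData and the maximum pixel size are assumptions) is to let ImageIO decode a thumbnail directly, so the full 32 MB bitmap is never materialized:
// imageData is assumed to hold the original JPEG/PNG bytes (e.g. straight from the camera).
NSDictionary *options = @{
    (id)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
    (id)kCGImageSourceCreateThumbnailWithTransform : @YES, // respect EXIF orientation
    (id)kCGImageSourceThumbnailMaxPixelSize : @1136        // longest side; assumed target
};
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL);
CGImageRef thumbRef = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);
UIImage *thumbnail = [UIImage imageWithCGImage:thumbRef];
CGImageRelease(thumbRef);
CFRelease(source);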
I've a PNG loaded into a UIImage. I want to get a portion of the image based on a path (i.e., it might not be rectangular). Say, it might be some shape with arcs, etc., like a drawing path.
What would be the easiest way to do that?
Thanks.
I haven't run this, so it may not be perfect, but it should give you an idea.
UIImage *imageToClip = //get your image somehow
CGPathRef yourPath = //get your path somehow
CGImageRef imageRef = [imageToClip CGImage];
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0, CGImageGetColorSpace(imageRef), kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst);
// Clip first, then draw: the clip only affects drawing that happens after it
CGContextAddPath(context, yourPath);
CGContextClip(context);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGImageRef clippedImageRef = CGBitmapContextCreateImage(context);
UIImage *clippedImage = [UIImage imageWithCGImage:clippedImageRef]; // your final, masked image
CGImageRelease(clippedImageRef);
CGContextRelease(context);
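A possibly simpler variant through UIKit (a sketch, assuming you have the path as a UIBezierPath; the CGImage route above also ignores the image's scale and orientation, which UIKit drawing handles for you):
UIGraphicsBeginImageContextWithOptions(imageToClip.size, NO, imageToClip.scale);
[bezierPath addClip];                  // bezierPath is an assumed UIBezierPath
[imageToClip drawAtPoint:CGPointZero]; // drawn via UIKit, so orientation is handled
UIImage *clipped = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();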
The easiest way is to add a category on UIImage with the following method:
- (UIImage *)scaleToRect:(CGRect)rect {
    // Create a bitmap graphics context
    // This will also set it as the current context
    UIGraphicsBeginImageContext(rect.size);
    // Draw the scaled image in the current context
    [self drawInRect:rect];
    // Create a new image from current context
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    // Pop the current context from the stack
    UIGraphicsEndImageContext();
    // Return our new scaled image
    return scaledImage;
}
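Hypothetical usage, assuming the category is in scope and bigImage is your source image:
UIImage *scaled = [bigImage scaleToRect:CGRectMake(0, 0, 480, 320)];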
I have a UIImage with some alpha values and want to make a gray version of it. I've been using the code below, and it works for the non-alpha parts of the image; however, since alpha is not supported/turned off, the alpha parts turn out black... How would I successfully turn alpha support on?
(I modified this from code floating around Stack Overflow to support other scales (read: Retina).)
- (UIImage *)grayscaledVersion2 {
    // Create image rectangle with current image width/height
    const CGRect RECT = CGRectMake(0, 0, self.size.width * self.scale, self.size.height * self.scale);
    // Grayscale color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    // Create bitmap content with current image size and grayscale colorspace
    CGContextRef context = CGBitmapContextCreate(nil, RECT.size.width, RECT.size.height, 8, 0, colorSpace, kCGImageAlphaNone);
    // kCGImageAlphaNone = no alpha; kCGImageAlphaPremultipliedFirst/kCGImageAlphaFirst/kCGImageAlphaLast = crash
    // Draw image into current context, with specified rectangle
    // using previously defined context (with grayscale colorspace)
    CGContextDrawImage(context, RECT, [self CGImage]);
    // Create bitmap image info from pixel data in current context
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *imageGray = [UIImage imageWithCGImage:imageRef scale:self.scale orientation:self.imageOrientation];
    DLog(@"greyed %@ (%f, %f %f) into %@ (%f, %f %f)", self, self.scale, self.size.width, self.size.height, imageGray, imageGray.scale, imageGray.size.width, imageGray.size.height);
    // Release colorspace, context and bitmap information
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    CFRelease(imageRef);
    return imageGray;
}
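Grayscale bitmap contexts genuinely don't accept alpha, which is why those other alpha combinations crash. One route, sketched from the two-pass snippet at the top of this page (not verified here), is to render an alpha-only mask in a second pass inside grayscaledVersion2, before imageRef is released, and combine it with the gray image:
// Second pass: render just the alpha channel (alpha-only contexts take a nil colorspace).
CGContextRef alphaContext = CGBitmapContextCreate(nil, RECT.size.width, RECT.size.height, 8, 0, nil, kCGImageAlphaOnly);
CGContextDrawImage(alphaContext, RECT, [self CGImage]);
CGImageRef mask = CGBitmapContextCreateImage(alphaContext);
CGContextRelease(alphaContext);
// Combine the gray pixels (imageRef from the first pass) with the alpha mask.
CGImageRef maskedRef = CGImageCreateWithMask(imageRef, mask);
UIImage *imageGrayWithAlpha = [UIImage imageWithCGImage:maskedRef scale:self.scale orientation:self.imageOrientation];
CGImageRelease(maskedRef);
CGImageRelease(mask);
// ...then return imageGrayWithAlpha instead of imageGray.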
I found this little code snippet that seems to do what I want, but I'm getting yelled at by Xcode saying self.CGImage isn't a property of my view controller (which makes sense, since that's a UIImage property). What changes would I need to make to this code for it to be functional? Thanks!
- (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage {
    CGContextRef mainViewContentContext;
    CGColorSpaceRef colorSpace;
    UIImage *tempImage;
    colorSpace = CGColorSpaceCreateDeviceRGB();
    // create a bitmap graphics context the size of the image
    mainViewContentContext = CGBitmapContextCreate(NULL, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
    // free the rgb colorspace
    CGColorSpaceRelease(colorSpace);
    CGImageRef maskingImage = [maskImage CGImage];
    CGContextClipToMask(mainViewContentContext, CGRectMake(0, 0, maskImage.size.width, maskImage.size.height), maskingImage);
    CGContextDrawImage(mainViewContentContext, CGRectMake(0, 0, image.size.width, image.size.height), self.CGImage);
    // Create CGImageRef of the main view bitmap content, and then
    // release that bitmap context
    CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext);
    // convert the finished resized image to a UIImage
    UIImage *theImage = [UIImage imageWithCGImage:mainViewContentBitmapContext];
    // image is retained by the property setting above, so we can
    // release the original
    CGContextRelease(mainViewContentContext);
    CGImageRelease(mainViewContentBitmapContext);
    maskingImage = nil;
    CGImageRelease(maskingImage);
    // return the image
    return theImage;
}
Try replacing self.CGImage with image.CGImage.
Place this method in a UIImage category (or subclass).
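A sketch of what the category version might look like (the category and method names are assumptions; as a UIImage method, self is the image being masked, so the separate image parameter goes away):
@interface UIImage (Masking)
- (UIImage *)imageMaskedWith:(UIImage *)maskImage; // hypothetical name
@end

@implementation UIImage (Masking)
- (UIImage *)imageMaskedWith:(UIImage *)maskImage {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, self.size.width, self.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    // clip to the mask, then draw the receiver through it
    CGContextClipToMask(context, CGRectMake(0, 0, maskImage.size.width, maskImage.size.height), maskImage.CGImage);
    CGContextDrawImage(context, CGRectMake(0, 0, self.size.width, self.size.height), self.CGImage);
    CGImageRef resultRef = CGBitmapContextCreateImage(context);
    UIImage *result = [UIImage imageWithCGImage:resultRef];
    CGContextRelease(context);
    CGImageRelease(resultRef);
    return result;
}
@end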