Found this little code snippet that seems to do what I want, but Xcode is yelling at me that self.CGImage isn't a property of my view controller (which makes sense, since that's a UIImage property). What changes would I need to make for this code to be functional? Thanks!
- (UIImage*) maskImage:(UIImage *)image withMask:(UIImage *)maskImage {
CGContextRef mainViewContentContext;
CGColorSpaceRef colorSpace;
UIImage* tempImage;
colorSpace = CGColorSpaceCreateDeviceRGB();
// create a bitmap graphics context the size of the image
mainViewContentContext = CGBitmapContextCreate (NULL, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
// free the rgb colorspace
CGColorSpaceRelease(colorSpace);
CGImageRef maskingImage = [maskImage CGImage];
CGContextClipToMask(mainViewContentContext, CGRectMake(0, 0, maskImage.size.width, maskImage.size.height), maskingImage);
CGContextDrawImage(mainViewContentContext, CGRectMake(0, 0, image.size.width, image.size.height), self.CGImage);
// Create CGImageRef of the main view bitmap content, and then
// release that bitmap context
CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext);
// convert the finished resized image to a UIImage
UIImage *theImage = [UIImage imageWithCGImage:mainViewContentBitmapContext];
// theImage retains the CGImage it was created from, so the context and the
// intermediate CGImage can be released
CGContextRelease(mainViewContentContext);
CGImageRelease(mainViewContentBitmapContext);
// maskingImage came from -[UIImage CGImage], which we don't own, so there is nothing to release here
// return the image
return theImage;
}
Try replacing self.CGImage with image.CGImage.
Alternatively, place this method in a UIImage category (or subclass), where self really is a UIImage.
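A minimal sketch of the category route, in case it helps ("Masking" is just an example category/file name, not from the original post); the implementation would be the method above with the image.CGImage fix applied, placed in the matching @implementation UIImage (Masking) block:
// UIImage+Masking.h -- example file/category name
@interface UIImage (Masking)
- (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage;
@end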
I've a PNG loaded into a UIImage. I want to get a portion of the image based on a path (i.e. it might not be rectangular). Say, it might be some shape with arcs, etc., like a drawing path.
What would be the easiest way to do that?
Thanks.
I haven't run this, so it may not be perfect, but it should give you an idea.
UIImage *imageToClip = //get your image somehow
CGPathRef yourPath = //get your path somehow
CGImageRef imageRef = [imageToClip CGImage];
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0, CGImageGetColorSpace(imageRef), kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst);
// Clip to the path first, then draw: clipping only affects subsequent drawing
CGContextAddPath(context, yourPath);
CGContextClip(context);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGImageRef clippedImageRef = CGBitmapContextCreateImage(context);
UIImage *clippedImage = [UIImage imageWithCGImage:clippedImageRef];//your final, masked image
CGImageRelease(clippedImageRef);
CGContextRelease(context);
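If building the path by hand is a pain, UIBezierPath can hand you the CGPathRef; a rough usage sketch (the oval shape and asset name are just placeholders):
UIImage *imageToClip = [UIImage imageNamed:@"photo"]; // placeholder asset name
UIBezierPath *shape = [UIBezierPath bezierPathWithOvalInRect:CGRectMake(0, 0, imageToClip.size.width, imageToClip.size.height)];
CGPathRef yourPath = shape.CGPath; // feed this into the snippet above
Note that the bitmap context above isn't flipped, so a path built in UIKit coordinates may come out vertically mirrored; you may need to flip the context or the path.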
The easiest way is to add a category on UIImage with the following method:
-(UIImage *)scaleToRect:(CGRect)rect{
// Create a bitmap graphics context
// This will also set it as the current context
UIGraphicsBeginImageContext(rect.size);
// Draw the scaled image in the current context
[self drawInRect:rect];
// Create a new image from current context
UIImage* scaledImage = UIGraphicsGetImageFromCurrentImageContext();
// Pop the current context from the stack
UIGraphicsEndImageContext();
// Return our new scaled image
return scaledImage;
}
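Usage is then a one-liner (assuming the category is visible at the call site):
UIImage *scaled = [originalImage scaleToRect:CGRectMake(0, 0, 100, 100)]; // originalImage is whatever UIImage you start with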
I am trying to do a pixel-by-pixel comparison of two UIImages and I need to retrieve the pixels that are different. Using this "Generate hash from UIImage" answer I found a way to generate a hash for a UIImage. Is there a way to compare the two hashes and retrieve the differing pixels?
If you want to actually retrieve the difference, the hash cannot help you. You can use the hash to detect the likely presence of differences, but to get the actual differences, you have to use other techniques.
For example, to create a UIImage that consists of the difference between two images, see this accepted answer, in which Cory Kilgor illustrates the use of CGContextSetBlendMode with a blend mode of kCGBlendModeDifference:
+ (UIImage *) differenceOfImage:(UIImage *)top withImage:(UIImage *)bottom {
CGImageRef topRef = [top CGImage];
CGImageRef bottomRef = [bottom CGImage];
// Dimensions
CGRect bottomFrame = CGRectMake(0, 0, CGImageGetWidth(bottomRef), CGImageGetHeight(bottomRef));
CGRect topFrame = CGRectMake(0, 0, CGImageGetWidth(topRef), CGImageGetHeight(topRef));
CGRect renderFrame = CGRectIntegral(CGRectUnion(bottomFrame, topFrame));
// Create context
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
if(colorSpace == NULL) {
printf("Error allocating color space.\n");
return NULL;
}
CGContextRef context = CGBitmapContextCreate(NULL,
renderFrame.size.width,
renderFrame.size.height,
8,
renderFrame.size.width * 4,
colorSpace,
kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
if(context == NULL) {
printf("Context not created!\n");
return NULL;
}
// Draw images
CGContextSetBlendMode(context, kCGBlendModeNormal);
CGContextDrawImage(context, CGRectOffset(bottomFrame, -renderFrame.origin.x, -renderFrame.origin.y), bottomRef);
CGContextSetBlendMode(context, kCGBlendModeDifference);
CGContextDrawImage(context, CGRectOffset(topFrame, -renderFrame.origin.x, -renderFrame.origin.y), topRef);
// Create image from context
CGImageRef imageRef = CGBitmapContextCreateImage(context);
UIImage * image = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGContextRelease(context);
return image;
}
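To then actually retrieve which pixels differ, you could walk the raw bitmap of that difference image and collect every non-black pixel; a rough sketch, assuming the RGBA layout the bitmap context above produces and a hypothetical host class ImageUtils for the method:
UIImage *diff = [ImageUtils differenceOfImage:top withImage:bottom]; // "ImageUtils" is a placeholder class name
CGImageRef diffRef = diff.CGImage;
size_t width = CGImageGetWidth(diffRef);
size_t height = CGImageGetHeight(diffRef);
size_t bytesPerRow = CGImageGetBytesPerRow(diffRef);
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(diffRef));
const UInt8 *bytes = CFDataGetBytePtr(pixelData);
NSMutableArray *differingPixels = [NSMutableArray array];
for (size_t y = 0; y < height; y++) {
for (size_t x = 0; x < width; x++) {
const UInt8 *p = bytes + y * bytesPerRow + x * 4;
// kCGBlendModeDifference leaves RGB = (0, 0, 0) wherever the two images matched
if (p[0] != 0 || p[1] != 0 || p[2] != 0) {
[differingPixels addObject:[NSValue valueWithCGPoint:CGPointMake(x, y)]];
}
}
}
CFRelease(pixelData);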
I am trying to access the pixels of a certain image which has been resized using this block:
-(UIImage *)imageResize:(UIImage *)imageResizable scaledToSize:(CGSize)newSize
{
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
[imageResizable drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
... where newSize is the size of a UIView into which I am trying to fit this image.
Now, I am supposed to access the pixels of this image, and do some filtering on it.
I use the following code block:
-(UIImage*) filter : (UIImage *) image
{
CGImageRef imageBuffTarget = [image CGImage];
CFMutableDataRef pixelDataTarget = CFDataCreateMutableCopy(0, 0, CGDataProviderCopyData(CGImageGetDataProvider(imageBuffTarget)));
NSUInteger width2 = CGImageGetWidth(imageBuffTarget);
NSUInteger height2 = CGImageGetHeight(imageBuffTarget);
UInt8 *target_image = (UInt8 *)CFDataGetMutableBytePtr(pixelDataTarget);
// Going forward, I want to do some processing here, on the *target_image data.
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo1 = kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast;
CGContextRef context = CGBitmapContextCreate(target_image, CGImageGetWidth(imageBuffTarget), CGImageGetHeight(imageBuffTarget), CGImageGetBitsPerComponent(imageBuffTarget), CGImageGetBytesPerRow(imageBuffTarget), colorSpaceRef, bitmapInfo1);
CGImageRef imageRef = CGBitmapContextCreateImage (context);
UIImage *newimage = [UIImage imageWithCGImage:imageRef];
CGColorSpaceRelease(colorSpaceRef);
CGContextRelease(context);
CFRelease(imageRef);
return newimage;
}
I take the image from the UIView, pass it on to the 'filter' method and set it back to the view.
But, on doing this, the app crashes and I get the following error in the console:
<Error>: CGBitmapContextCreate: invalid data bytes/row: should be at least 1280 for 8 integer bits/component, 3 components, kCGImageAlphaPremultipliedLast.
<Error>: CGBitmapContextCreateImage: invalid context 0x0
When I change:
CGContextRef context = CGBitmapContextCreate(target_image, CGImageGetWidth(imageBuffTarget), CGImageGetHeight(imageBuffTarget), CGImageGetBitsPerComponent(imageBuffTarget), CGImageGetBytesPerRow(imageBuffTarget), colorSpaceRef, bitmapInfo1);
TO
CGContextRef context = CGBitmapContextCreate(target_image, CGImageGetWidth(imageBuffTarget), CGImageGetHeight(imageBuffTarget), CGImageGetBitsPerComponent(imageBuffTarget), 2*CGImageGetBytesPerRow(imageBuffTarget), colorSpaceRef, bitmapInfo1);
(2 multiplied to the 'bytes per row', which does exceed 1280)
the app doesn't crash, but the output on the view comes out to be a distorted and skewed version of the original image.
Please note that when I call CGImageGetHeight(imageBuffTarget) and CGImageGetWidth(imageBuffTarget), I get the exact height and width of the ImageView whose size I passed into the imageResize method.
Could you please help me figure out the mistake in this code?
Thanks in advance.
I have a UITableViewCell with an image of the right size.
This is how the cell should look:
And I have the background:
And the image placeholder:
And I want to know if there is a way to crop the image with the iOS SDK?
Yes, that's possible:
UIImage *imageToCrop = ...;
UIGraphicsBeginImageContext(imageToCrop.size);
CGContextRef context = UIGraphicsGetCurrentContext();
// Clip to the ellipse first, then draw; clipping only affects subsequent drawing
CGContextAddEllipseInRect(context, CGRectMake(0, 0, imageToCrop.size.width, imageToCrop.size.height));
CGContextClip(context);
[imageToCrop drawAtPoint:CGPointZero];
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You can use Core Graphics to apply a mask or to clip with a path. A mask is an image whose alpha channel determines which parts of the underlying image show through. Below is an example of clipping with an image mask:
- (UIImage *)croppedImage:(UIImage *)sourceImage
{
// width/height are the desired output size; here we take them from the mask image
UIImage *maskImage = [UIImage imageNamed:@"mask"];
CGFloat width = maskImage.size.width;
CGFloat height = maskImage.size.height;
UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, height), NO, [UIScreen mainScreen].scale);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextClipToMask(context, CGRectMake(0, 0, width, height), maskImage.CGImage);
[sourceImage drawInRect:CGRectMake(0, 0, width, height)];
UIImage *resultImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return resultImage;
}
Then you can write cell.picture = [self croppedImage:sourceImage];
You can use the image-masking technique to crop this image.
Please have a look at this link:
https://developer.apple.com/library/mac/documentation/graphicsimaging/conceptual/drawingwithquartz2d/dq_images/dq_images.html#//apple_ref/doc/uid/TP30001066-CH212-CJBHIJEB
I have written some code that may help you out:
@interface ImageRenderer : NSObject {
UIImage *image_;
}
@property (nonatomic, retain) UIImage *image;
- (void)cropImageinRect:(CGRect)rect;
- (void)maskImageWithMask:(UIImage *)maskImage;
- (void)imageWithAlpha;
@end
@implementation ImageRenderer
@synthesize image = image_;
- (void)cropImageinRect:(CGRect)rect {
CGImageRef imageRef = CGImageCreateWithImageInRect(image_.CGImage, rect);
image_ = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
}
- (void)maskImageWithMask:(UIImage *)maskImage {
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef maskImageRef = [maskImage CGImage];
// create a bitmap graphics context the size of the image
CGContextRef mainViewContentContext = CGBitmapContextCreate (NULL, maskImage.size.width, maskImage.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
// the context holds its own reference, so the color space can be released here
CGColorSpaceRelease(colorSpace);
if (mainViewContentContext == NULL){
return;
}
CGFloat ratio = maskImage.size.width / image_.size.width;
if(ratio * image_.size.height < maskImage.size.height) {
ratio = maskImage.size.height/ image_.size.height;
}
CGRect rect1 = {{0, 0}, {maskImage.size.width, maskImage.size.height}};
CGRect rect2 = {{-((image_.size.width*ratio)-maskImage.size.width)/2 , -((image_.size.height*ratio)-maskImage.size.height)/2}, {image_.size.width*ratio, image_.size.height*ratio}};
CGContextClipToMask(mainViewContentContext, rect1, maskImageRef);
CGContextDrawImage(mainViewContentContext, rect2, image_.CGImage);
// Create CGImageRef of the main view bitmap content, and then
// release that bitmap context
CGImageRef newImage = CGBitmapContextCreateImage(mainViewContentContext);
CGContextRelease(mainViewContentContext);
image_ = [UIImage imageWithCGImage:newImage];
CGImageRelease(newImage);
}
- (void)imageWithAlpha {
CGImageRef imageRef = image_.CGImage;
CGFloat width = CGImageGetWidth(imageRef);
CGFloat height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(nil, width, height, 8, 0, colorSpace, kCGImageAlphaPremultipliedFirst);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGImageRef resultImageRef = CGBitmapContextCreateImage(context);
image_ = [UIImage imageWithCGImage:resultImageRef scale:image_.scale orientation:image_.imageOrientation];
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
CGImageRelease(resultImageRef);
}
@end
With this code you can crop the image out of a bigger one, and then use a mask image to get the shape you need.
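Usage might then look roughly like this (assuming ARC; the asset names and the cell variable are placeholders from a hypothetical cellForRowAtIndexPath: implementation):
ImageRenderer *renderer = [[ImageRenderer alloc] init];
renderer.image = [UIImage imageNamed:@"photo"]; // placeholder asset name
[renderer cropImageinRect:CGRectMake(0, 0, 200, 200)]; // pull a region out of the bigger image
[renderer maskImageWithMask:[UIImage imageNamed:@"mask"]]; // then shape it with the placeholder mask image
cell.imageView.image = renderer.image;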
I want to allow the user to take a picture and then show the grayscale version. However, it is very slow because the image file is too big / the resolution is too high.
How can I reduce the quality of the image when the user takes the picture?
Here's the code I am using for the transformation:
- (UIImage *)convertImageToGrayScale:(UIImage *)image
{
// Create image rectangle with current image width/height
CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
// Grayscale color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
// Create bitmap content with current image size and grayscale colorspace
CGContextRef context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, colorSpace, kCGImageAlphaNone);
// Draw image into current context, with specified rectangle
// using previously defined context (with grayscale colorspace)
CGContextDrawImage(context, imageRect, [image CGImage]);
/* changes start here */
// Create bitmap image info from pixel data in current context
CGImageRef grayImage = CGBitmapContextCreateImage(context);
// release the colorspace and graphics context
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
// make a new alpha-only graphics context
context = CGBitmapContextCreate(nil, image.size.width, image.size.height, 8, 0, nil, kCGImageAlphaOnly);
// draw image into context with no colorspace
CGContextDrawImage(context, imageRect, [image CGImage]);
// create alpha bitmap mask from current context
CGImageRef mask = CGBitmapContextCreateImage(context);
// release graphics context
CGContextRelease(context);
// make UIImage from grayscale image with alpha mask
UIImage *grayScaleImage = [UIImage imageWithCGImage:CGImageCreateWithMask(grayImage, mask) scale:image.scale orientation:image.imageOrientation];
// release the CG images
CGImageRelease(grayImage);
CGImageRelease(mask);
// return the new grayscale image
return grayScaleImage;
/* changes end here */
}
How about downsampling the UIImage before passing it on to the grayscale conversion? Something like:
NSData *imageAsData = UIImageJPEGRepresentation(imageFromCamera, 0.5);
UIImage *downsampledImaged = [UIImage imageWithData:imageAsData];
You could of course use compression qualities other than 0.5.
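Keep in mind that UIImageJPEGRepresentation only changes the compression, not the pixel dimensions, so if the per-pixel grayscale pass is the slow part you may also want to shrink the bitmap itself, along the lines of the imageResize: method shown in an earlier question above; a rough sketch (the /4.0 factor is arbitrary):
CGSize smallSize = CGSizeMake(imageFromCamera.size.width / 4.0, imageFromCamera.size.height / 4.0);
UIGraphicsBeginImageContextWithOptions(smallSize, NO, 1.0);
[imageFromCamera drawInRect:CGRectMake(0, 0, smallSize.width, smallSize.height)];
UIImage *smallImage = UIGraphicsGetImageFromCurrentImageContext(); // far fewer pixels for the grayscale pass to touch
UIGraphicsEndImageContext();
UIImage *grayImage = [self convertImageToGrayScale:smallImage];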
If you are using AVFoundation to capture the image, you can set the quality of the captured image by changing the capture session preset, like the following:
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetLow;
There is a table showing which presets correspond to which resolutions in the AVFoundation Programming Guide.
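Not every preset is supported by every device or capture configuration, so a cautious sketch would check first (AVCaptureSessionPreset640x480 here is just an example middle-ground choice):
AVCaptureSession *session = [[AVCaptureSession alloc] init];
if ([session canSetSessionPreset:AVCaptureSessionPreset640x480]) {
session.sessionPreset = AVCaptureSessionPreset640x480; // enough detail for grayscale, far fewer pixels than full resolution
} else {
session.sessionPreset = AVCaptureSessionPresetLow;
}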