MonoTouch, drawing a cropping region layer over a UIImageView - iOS

In our app we'd like to be able to crop, scale and pan an image, and I just can't seem to figure out how I am supposed to draw a cropping region on top of my UIImageView.
I tried messing with CoreGraphics: I could render a region with a black stroke on my image, but the image would flip. Not only that, but since I had drawn ON the image, I'm afraid that if I were to move and scale it using gestures, the region would be affected too!
A push in the right direction would be much appreciated!
Here's my code, which doesn't really do what I want, to show some research effort.
// Aspect ratio - currently 1:1
const int arWidth = 1;
const int arHeight = 1;

UIGraphics.BeginImageContext (ImgToCrop.Frame.Size);
var context = UIGraphics.GetCurrentContext ();

// Set the line width
context.SetLineWidth (4);
UIColor.Black.SetStroke ();

// Our starting points
float x = 0, y = 0;
// The sizes
float width = ImgToCrop.Frame.Width, height = ImgToCrop.Frame.Height;

// Calculate the geometry
if (arWidth == arHeight) {
    // The aspect ratio is 1:1
    width = ImgToCrop.Frame.Width;
    height = width;
    x = 0;
    y = ImgToCrop.Frame.GetMidY () - height / 2;
}

// The rect
var drawRect = new RectangleF (x, y, width, height);

// CGContext.DrawImage uses CoreGraphics' bottom-left origin,
// which is why the image comes out flipped here.
context.DrawImage (new RectangleF (
    ImgToCrop.Frame.X,
    ImgToCrop.Frame.Y,
    ImgToCrop.Frame.Width,
    ImgToCrop.Frame.Height), ImgToCrop.Image.CGImage);

// Draw it
context.StrokeRect (drawRect);
ImgToCrop.Image = UIGraphics.GetImageFromCurrentImageContext ();
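One way to get the decoupling being asked for is to draw the region in a separate transparent overlay view stacked above the UIImageView, so gestures that move or scale the image never disturb the overlay. A minimal sketch, untested, with class and property names of our own choosing:

public class CropOverlayView : UIView
{
    public RectangleF CropRegion { get; set; }

    public CropOverlayView (RectangleF frame) : base (frame)
    {
        // Transparent background so the image below stays visible.
        BackgroundColor = UIColor.Clear;
        // Let touches fall through to the image view underneath.
        UserInteractionEnabled = false;
    }

    public override void Draw (RectangleF rect)
    {
        base.Draw (rect);
        var context = UIGraphics.GetCurrentContext ();
        context.SetLineWidth (4);
        UIColor.Black.SetStroke ();
        context.StrokeRect (CropRegion);
    }
}

Adding it with View.AddSubview (new CropOverlayView (ImgToCrop.Frame) { CropRegion = drawRect }) keeps the stroke on screen while the image underneath pans and scales freely.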

Maybe this will help you:
public static UIImage ScaleToSize (UIImage image, int width, int height)
{
    UIGraphics.BeginImageContext (new SizeF (width, height));
    CGContext ctx = UIGraphics.GetCurrentContext ();
    float ratio = (float) width / (float) height;

    ctx.AddRect (new RectangleF (0.0f, 0.0f, width, height));
    ctx.Clip ();

    var cg = image.CGImage;
    float h = cg.Height;
    float w = cg.Width;
    float ar = w / h;

    if (ar != ratio) {
        // Image's aspect ratio is wrong so we'll need to crop
        float scaleY = height / h;
        float scaleX = width / w;
        PointF offset;
        SizeF crop;
        float size;
        if (scaleX >= scaleY) {
            size = h * (w / width);
            offset = new PointF (0.0f, h / 2.0f - size / 2.0f);
            crop = new SizeF (w, size);
        } else {
            size = w * (h / height);
            offset = new PointF (w / 2.0f - size / 2.0f, 0.0f);
            crop = new SizeF (size, h);
        }
        // Crop the image and flip it to the correct orientation (otherwise it will be upside down)
        ctx.ScaleCTM (1.0f, -1.0f);
        using (var copy = cg.WithImageInRect (new RectangleF (offset, crop))) {
            ctx.DrawImage (new RectangleF (0.0f, 0.0f, width, -height), copy);
        }
    } else {
        image.Draw (new RectangleF (0.0f, 0.0f, width, height));
    }

    UIImage scaled = UIGraphics.GetImageFromCurrentImageContext ();
    UIGraphics.EndImageContext ();
    return scaled;
}
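For example, assuming ImgToCrop already holds an image, a square result could be produced like this (the 300×300 target is our own choice, not from the original post):

ImgToCrop.Image = ScaleToSize (ImgToCrop.Image, 300, 300);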

Related

Xamarin iOS - rotating a picture (UIImage) 180 degrees around its center point

I'm trying to rotate a UIImage by 180 degrees around its center point using CGAffineTransform, but it is not working as I want :/
Code:
UIImage RotateImage (UIImage imageIn)
{
    int kMaxResolution = 2048;
    CGImage imgRef = imageIn.CGImage;
    float width = imgRef.Width;
    float height = imgRef.Height;
    CGAffineTransform transform = CGAffineTransform.MakeIdentity ();
    RectangleF bounds = new RectangleF (0, 0, width, height);

    if (width > kMaxResolution || height > kMaxResolution) {
        float ratio = width / height;
        if (ratio > 1) {
            bounds.Width = kMaxResolution;
            bounds.Height = bounds.Width / ratio;
        } else {
            bounds.Height = kMaxResolution;
            bounds.Width = bounds.Height * ratio;
        }
    }

    float scaleRatio = bounds.Width / width;
    SizeF imageSize = new SizeF (width, height);
    float boundHeight;

    // This piece of code should make this UIImage upside down
    transform = CGAffineTransform.MakeTranslation (imageSize.Width, imageSize.Height);
    transform = CGAffineTransform.Rotate (transform, (float)Math.PI);
    //=============

    UIGraphics.BeginImageContext (bounds.Size);
    CGContext context = UIGraphics.GetCurrentContext ();
    context.ScaleCTM (scaleRatio, -scaleRatio);
    context.TranslateCTM (0, -height);
    context.ConcatCTM (transform);
    context.DrawImage (new RectangleF (0, 0, width, height), imgRef);
    UIImage imageCopy = UIGraphics.GetImageFromCurrentImageContext ();
    UIGraphics.EndImageContext ();
    return imageCopy;
}
The image is scaled after this, but it's not upside down. Why?
Thank you for your answers in advance.
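For comparison, here is a minimal sketch (ours, untested, not from the original thread) that rotates 180 degrees about the center by letting UIImage.Draw handle CoreGraphics' vertical flip, so the only transform left in play is the rotation itself:

UIImage Rotate180 (UIImage source)
{
    var size = source.Size;
    UIGraphics.BeginImageContext (size);
    var ctx = UIGraphics.GetCurrentContext ();
    // Rotate about the image center instead of the origin.
    ctx.TranslateCTM (size.Width / 2, size.Height / 2);
    ctx.RotateCTM ((float)Math.PI);
    ctx.TranslateCTM (-size.Width / 2, -size.Height / 2);
    // UIImage.Draw compensates for the flipped coordinate system,
    // unlike CGContext.DrawImage, so no ScaleCTM (1, -1) is needed.
    source.Draw (new RectangleF (0, 0, size.Width, size.Height));
    var result = UIGraphics.GetImageFromCurrentImageContext ();
    UIGraphics.EndImageContext ();
    return result;
}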

Masking an image using a bezier path at the image's full resolution

Hi, I have a path (shape) and a high-resolution image. I make the high-res image aspect-fit inside the view on which I draw the path, and I want to mask the image with the path, but at the full resolution of the image, not at the resolution at which we see the path. The problem: it works perfectly when I don't scale them up for high-resolution masking, but when I do, everything is messed up. The mask gets stretched and the origins don't make sense.
All I want is to be able to upscale the path with the same aspect ratio as the image (at the full resolution of the image) and position it correctly so it can mask the high-res image properly.
I've tried this:
Masking CGContext with a CGPathRef?
and this
Creating mask with CGImageMaskCreate is all black (iphone)
and this
Clip UIImage to UIBezierPath (not masking)
none of which works correctly when I try to mask a high quality image (bigger than screen resolution)
EDIT: I posted a working project on GitHub that shows the difference between normal-quality masking (at the screen's resolution) and high-quality masking (at the image's resolution). I'd really appreciate any help.
https://github.com/Reza-Abdolahi/HighResMasking
If I understand your question correctly:
You have an image view containing an image that may have been scaled down (or even scaled up) using UIViewContentModeScaleAspectFit.
You have a bezier path whose points are in the geometry (coordinate system) of that image view.
And now you want to create a copy of the image, at its original resolution, masked by the bezier path.
We can think of the image as having its own geometry, with the origin at the top left corner of the image and one unit along each axis being one point. So what we need to do is:
1. Create a graphics renderer big enough to draw the image into without scaling. The geometry of this renderer is the image's geometry.
2. Transform the bezier path from the view geometry to the renderer geometry.
3. Apply the transformed path to the renderer's clip region.
4. Draw the image (untransformed) into the renderer.
Step 2 is the hard one, because we have to come up with the correct CGAffineTransform. In an aspect-fit scenario, the transform needs to not only scale the image, but possibly translate it along either the x axis or the y axis (but not both). But let's be more general and support other UIViewContentMode settings. Here's a category that lets you ask a UIImageView for the transform that converts points in the view's geometry to points in the image's geometry:
@implementation UIImageView (ImageGeometry)

/**
 * Return a transform that converts points in my geometry to points in the
 * image's geometry. The origin of the image's geometry is at its upper
 * left corner, and one unit along each axis is one point in the image.
 */
- (CGAffineTransform)imageGeometryTransform {
    CGRect viewBounds = self.bounds;
    CGSize viewSize = viewBounds.size;
    CGSize imageSize = self.image.size;

    CGFloat xScale = imageSize.width / viewSize.width;
    CGFloat yScale = imageSize.height / viewSize.height;
    CGFloat tx, ty;
    switch (self.contentMode) {
        case UIViewContentModeScaleToFill: tx = 0; ty = 0; break;
        case UIViewContentModeScaleAspectFit:
            if (xScale > yScale) { tx = 0; ty = 0.5; yScale = xScale; }
            else if (xScale < yScale) { tx = 0.5; ty = 0; xScale = yScale; }
            else { tx = 0; ty = 0; }
            break;
        case UIViewContentModeScaleAspectFill:
            if (xScale < yScale) { tx = 0; ty = 0.5; yScale = xScale; }
            else if (xScale > yScale) { tx = 0.5; ty = 0; xScale = yScale; }
            else { tx = 0; ty = 0; imageSize = viewSize; }
            break;
        case UIViewContentModeCenter: tx = 0.5; ty = 0.5; xScale = yScale = 1; break;
        case UIViewContentModeTop: tx = 0.5; ty = 0; xScale = yScale = 1; break;
        case UIViewContentModeBottom: tx = 0.5; ty = 1; xScale = yScale = 1; break;
        case UIViewContentModeLeft: tx = 0; ty = 0.5; xScale = yScale = 1; break;
        case UIViewContentModeRight: tx = 1; ty = 0.5; xScale = yScale = 1; break;
        case UIViewContentModeTopLeft: tx = 0; ty = 0; xScale = yScale = 1; break;
        case UIViewContentModeTopRight: tx = 1; ty = 0; xScale = yScale = 1; break;
        case UIViewContentModeBottomLeft: tx = 0; ty = 1; xScale = yScale = 1; break;
        case UIViewContentModeBottomRight: tx = 1; ty = 1; xScale = yScale = 1; break;
        default: return CGAffineTransformIdentity; // Mode not supported by UIImageView.
    }

    tx *= (imageSize.width - xScale * (viewBounds.origin.x + viewSize.width));
    ty *= (imageSize.height - yScale * (viewBounds.origin.y + viewSize.height));
    CGAffineTransform transform = CGAffineTransformMakeTranslation(tx, ty);
    transform = CGAffineTransformScale(transform, xScale, yScale);
    return transform;
}

@end
Armed with this, we can write the code that masks the image. In my test app, I have a subclass of UIImageView named PathEditingView that handles the bezier path editing. So my view controller creates the masked image like this:
- (UIImage *)maskedImage {
    UIImage *image = self.pathEditingView.image;

    UIGraphicsImageRendererFormat *format = [[UIGraphicsImageRendererFormat alloc] init];
    format.scale = image.scale;
    format.prefersExtendedRange = image.imageRendererFormat.prefersExtendedRange;
    format.opaque = NO;

    UIGraphicsImageRenderer *renderer = [[UIGraphicsImageRenderer alloc] initWithSize:image.size format:format];
    return [renderer imageWithActions:^(UIGraphicsImageRendererContext * _Nonnull rendererContext) {
        UIBezierPath *path = [self.pathEditingView.path copy];
        [path applyTransform:self.pathEditingView.imageGeometryTransform];
        CGContextRef gc = UIGraphicsGetCurrentContext();
        CGContextAddPath(gc, path.CGPath);
        CGContextClip(gc);
        [image drawAtPoint:CGPointZero];
    }];
}
And it looks like this:
Of course it's hard to tell that the output image is full-resolution. Let's fix that by cropping the output image to the bounding box of the bezier path:
- (UIImage *)maskedAndCroppedImage {
    UIImage *image = self.pathEditingView.image;
    UIBezierPath *path = [self.pathEditingView.path copy];
    [path applyTransform:self.pathEditingView.imageGeometryTransform];
    CGRect pathBounds = CGPathGetPathBoundingBox(path.CGPath);

    UIGraphicsImageRendererFormat *format = [[UIGraphicsImageRendererFormat alloc] init];
    format.scale = image.scale;
    format.prefersExtendedRange = image.imageRendererFormat.prefersExtendedRange;
    format.opaque = NO;

    UIGraphicsImageRenderer *renderer = [[UIGraphicsImageRenderer alloc] initWithSize:pathBounds.size format:format];
    return [renderer imageWithActions:^(UIGraphicsImageRendererContext * _Nonnull rendererContext) {
        CGContextRef gc = UIGraphicsGetCurrentContext();
        CGContextTranslateCTM(gc, -pathBounds.origin.x, -pathBounds.origin.y);
        CGContextAddPath(gc, path.CGPath);
        CGContextClip(gc);
        [image drawAtPoint:CGPointZero];
    }];
}
Masking and cropping together look like this:
You can see in this demo that the output image has much more detail than was visible in the input view, because it was generated at the full resolution of the input image.
As a secondary answer, I made it work with the code below. For better understanding, you can get the working project on GitHub as well, to check whether it works in all cases.
My GitHub project:
https://github.com/Reza-Abdolahi/HighResMasking
The part of the code that solved the problem:
- (UIImage *)highResolutionMasking {
    NSLog(@"///High quality (image resolution) masking///");
    // 1. Render the path into an image with the size of _targetBound (the size of a
    //    device-screen-sized view in which the path is drawn).
    CGFloat aspectRatioOfImageBasedOnHeight = _highResolutionImage.size.height / _highResolutionImage.size.width;
    CGFloat aspectRatioOfTargetBoundBasedOnHeight = _targetBound.size.height / _targetBound.size.width;
    CGFloat pathScalingFactor = 0;

    if ((_highResolutionImage.size.height >= _targetBound.size.height) ||
        (_highResolutionImage.size.width >= _targetBound.size.width)) {
        // The image is bigger than targetBound
        if (_highResolutionImage.size.height <= _highResolutionImage.size.width) {
            // The image is horizontal
            CGFloat newWidthForTargetBound = _highResolutionImage.size.width;
            CGFloat ratioOfHighresImgWidthToTargetBoundWidth = (_highResolutionImage.size.width / _targetBound.size.width);
            CGFloat newHeightForTargetBound = _targetBound.size.height * ratioOfHighresImgWidthToTargetBoundWidth;
            _bigTargetBound = CGRectMake(0, 0, newWidthForTargetBound, newHeightForTargetBound);
            pathScalingFactor = _highResolutionImage.size.width / _targetBound.size.width;
        } else if ((_highResolutionImage.size.height > _highResolutionImage.size.width) &&
                   (aspectRatioOfImageBasedOnHeight <= aspectRatioOfTargetBoundBasedOnHeight)) {
            // The image is vertical but has a smaller aspect ratio (based on height) than targetBound
            CGFloat newWidthForTargetBound = _highResolutionImage.size.width;
            CGFloat ratioOfHighresImgWidthToTargetBoundWidth = (_highResolutionImage.size.width / _targetBound.size.width);
            CGFloat newHeightForTargetBound = _targetBound.size.height * ratioOfHighresImgWidthToTargetBoundWidth;
            _bigTargetBound = CGRectMake(0, 0, newWidthForTargetBound, newHeightForTargetBound);
            pathScalingFactor = _highResolutionImage.size.width / _targetBound.size.width;
        } else if ((_highResolutionImage.size.height > _highResolutionImage.size.width) &&
                   (aspectRatioOfImageBasedOnHeight > aspectRatioOfTargetBoundBasedOnHeight)) {
            CGFloat newHeightForTargetBound = _highResolutionImage.size.height;
            CGFloat ratioOfHighresImgHeightToTargetBoundHeight = (_highResolutionImage.size.height / _targetBound.size.height);
            CGFloat newWidthForTargetBound = _targetBound.size.width * ratioOfHighresImgHeightToTargetBoundHeight;
            _bigTargetBound = CGRectMake(0, 0, newWidthForTargetBound, newHeightForTargetBound);
            pathScalingFactor = _highResolutionImage.size.height / _targetBound.size.height;
        } else {
            // Do nothing
        }
    } else {
        // The image is smaller than targetBound
        _bigTargetBound = _imageRect;
        pathScalingFactor = 1;
    }

    CGSize correctedSize = CGSizeMake(_highResolutionImage.size.width * _scale,
                                      _highResolutionImage.size.height * _scale);
    _bigImageRect = AVMakeRectWithAspectRatioInsideRect(correctedSize, _bigTargetBound);

    // Scale the path
    CGAffineTransform scaleTransform = CGAffineTransformIdentity;
    scaleTransform = CGAffineTransformScale(scaleTransform, pathScalingFactor, pathScalingFactor);
    CGPathRef scaledCGPath = CGPathCreateCopyByTransformingPath(_examplePath.CGPath, &scaleTransform);

    // Render the scaled path into an image
    UIGraphicsBeginImageContextWithOptions(_bigTargetBound.size, NO, 1.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextAddPath(context, scaledCGPath);
    CGContextSetFillColorWithColor(context, [UIColor redColor].CGColor);
    CGContextSetStrokeColorWithColor(context, [UIColor redColor].CGColor);
    CGContextDrawPath(context, kCGPathFillStroke);
    UIImage *pathImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    NSLog(@"High res pathImage.size: %@", NSStringFromCGSize(pathImage.size));

    // Crop it from targetBound into imageRect
    _maskImage = [self cropThisImage:pathImage toRect:_bigImageRect];
    NSLog(@"High res _croppedRenderedPathImage.size: %@", NSStringFromCGSize(_maskImage.size));

    // Mask the high-res image with the mask image; both have the same size now.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef maskImageRef = [_maskImage CGImage];
    CGContextRef myContext = CGBitmapContextCreate(NULL, _highResolutionImage.size.width, _highResolutionImage.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (myContext == NULL)
        return NULL;

    CGFloat ratio = 0;
    ratio = _maskImage.size.width / _highResolutionImage.size.width;
    if (ratio * _highResolutionImage.size.height < _maskImage.size.height) {
        ratio = _maskImage.size.height / _highResolutionImage.size.height;
    }

    CGRect rectForMask = {{0, 0}, {_maskImage.size.width, _maskImage.size.height}};
    CGRect rectForImageDrawing = {{-((_highResolutionImage.size.width * ratio) - _maskImage.size.width) / 2,
                                   -((_highResolutionImage.size.height * ratio) - _maskImage.size.height) / 2},
                                  {_highResolutionImage.size.width * ratio, _highResolutionImage.size.height * ratio}};

    CGContextClipToMask(myContext, rectForMask, maskImageRef);
    CGContextDrawImage(myContext, rectForImageDrawing, _highResolutionImage.CGImage);
    CGImageRef newImage = CGBitmapContextCreateImage(myContext);
    CGContextRelease(myContext);
    UIImage *theImage = [UIImage imageWithCGImage:newImage];
    CGImageRelease(newImage);
    return theImage;
}

- (UIImage *)cropThisImage:(UIImage *)image toRect:(CGRect)rect {
    CGImageRef subImage = CGImageCreateWithImageInRect(image.CGImage, rect);
    UIImage *croppedImage = [UIImage imageWithCGImage:subImage];
    CGImageRelease(subImage);
    return croppedImage;
}

Xamarin UIImage rotation

I try to rotate an image and save it. When I save the image as it is, it works, but when I try to rotate the image, the rotation works but the image is blank.
public UIImage RotateImage (UIImage originalImage, int rotationAngle)
{
    UIImage rotatedImage = originalImage;
    if (rotationAngle > 0) {
        CGSize rotatedSize;
        float angle = Convert.ToSingle ((Math.PI / 180) * rotationAngle);
        using (UIView rotatedViewBox = new UIView (new CGRect (0, 0, originalImage.Size.Width, originalImage.Size.Height))) {
            CGAffineTransform t = CGAffineTransform.MakeRotation (angle);
            rotatedViewBox.Transform = t;
            rotatedSize = rotatedViewBox.Frame.Size;
            UIGraphics.BeginImageContext (rotatedSize);
            CGContext context = UIGraphics.GetCurrentContext ();
            context.TranslateCTM (rotatedSize.Width / 2, rotatedSize.Height / 2);
            context.RotateCTM (angle);
            context.ScaleCTM ((nfloat)1.0, -(nfloat)1.0);
            context.DrawImage (new CGRect (-originalImage.Size.Width / 2, -originalImage.Size.Height / 2, originalImage.Size.Width, originalImage.Size.Height), originalImage.CGImage);
            rotatedImage = UIGraphics.GetImageFromCurrentImageContext ();
            UIGraphics.EndImageContext ();
        }
    }
    return rotatedImage;
}
Here is what I get (screenshot: rotated image) compared with the original (screenshot: original image).
Update: a small code change - rotating the image using CGContextDrawImage:
CGImage imgRef = originalImage.CGImage;
float width = imgRef.Width;
float height = imgRef.Height;
CGAffineTransform transform = CGAffineTransform.MakeIdentity();
RectangleF bounds = new RectangleF(0, 0, width, height);
float angle = Convert.ToSingle((rotationAngle / 180f) * Math.PI);
transform = CGAffineTransform.MakeRotation(angle);
UIGraphics.BeginImageContext(bounds.Size);
CGContext context = UIGraphics.GetCurrentContext();
context.TranslateCTM(width / 2, height / 2);
context.SaveState();
context.ConcatCTM(transform);
context.SaveState();
context.ConcatCTM(CGAffineTransform.MakeScale(1.0f, -1.0f));
context.DrawImage(new RectangleF(-width / 2, -height / 2, width, height), imgRef);
context.RestoreState();
UIImage imageCopy = UIGraphics.GetImageFromCurrentImageContext();
UIGraphics.EndImageContext();
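A hedged sketch (ours, untested, not from the original thread) of the same rotation using UIImage.Draw, which performs the coordinate flip itself and so sidesteps the manual ScaleCTM (1, -1) plus CGContext.DrawImage combination from the question:

public UIImage RotateImageSketch (UIImage originalImage, int rotationAngle)
{
    float angle = (float)(Math.PI / 180 * rotationAngle);
    CGSize rotatedSize;
    using (var box = new UIView (new CGRect (0, 0, originalImage.Size.Width, originalImage.Size.Height))) {
        box.Transform = CGAffineTransform.MakeRotation (angle);
        rotatedSize = box.Frame.Size;
    }
    // Preserve the source image's scale factor (BeginImageContext assumes 1.0).
    UIGraphics.BeginImageContextWithOptions (rotatedSize, false, originalImage.CurrentScale);
    var ctx = UIGraphics.GetCurrentContext ();
    ctx.TranslateCTM (rotatedSize.Width / 2, rotatedSize.Height / 2);
    ctx.RotateCTM (angle);
    // UIImage.Draw handles the vertical flip, so no ScaleCTM (1, -1) here.
    originalImage.Draw (new CGRect (-originalImage.Size.Width / 2, -originalImage.Size.Height / 2,
        originalImage.Size.Width, originalImage.Size.Height));
    var rotated = UIGraphics.GetImageFromCurrentImageContext ();
    UIGraphics.EndImageContext ();
    return rotated;
}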

Painting on CGContextRef on Retina display

In my application I have one main view (with its UIGraphicsGetCurrentContext()) and several other CGContextRef canvases. All of them are the same size: the size of the screen, without the scale factor.
I draw on the secondary canvases and at the end use CGBitmapContextCreateImage on the secondary canvases and CGContextDrawImage to draw on the main canvas.
On retina displays the result is poor and all lines look pixelated.
How should I handle painting on retina displays?
The following is the code I use to draw a context to the main context:
void CocoaGraphicsContext::drawContext(GraphicsContext* context, double x, double y, Box* cropBox, Matrix3D* matrix)
{
    CGAffineTransform matInverted;
    if (matrix != NULL)
    {
        CGAffineTransform transMat = CGAffineTransformMake(matrix->vx1, matrix->vy1, matrix->vx2, matrix->vy2, matrix->tx, matrix->ty);
        matInverted = CGAffineTransformInvert(transMat);
        CGContextConcatCTM(_cgContext, transMat);
    }

    CGImageRef cgImage = CGBitmapContextCreateImage(((CocoaGraphicsContext*)context)->cgContext());
    CGContextSaveGState(_cgContext);
    CGContextSetInterpolationQuality(_cgContext, kCGInterpolationNone);

    bool shouldCrop = ((cropBox != NULL) && (cropBox->valid()));
    if (shouldCrop)
    {
        CGContextClipToRect(_cgContext, CGRectMake(cropBox->x(), cropBox->y(), cropBox->width(), cropBox->height()));
    }

    CGContextDrawImage(_cgContext, CGRectMake(x, y, context->width(), context->height()), cgImage);
    CGContextRestoreGState(_cgContext);
    CGImageRelease(cgImage);

    if (matrix != NULL)
    {
        CGContextConcatCTM(_cgContext, matInverted);
    }
}
The main context is obtained via UIGraphicsGetCurrentContext() in a UIView class.
---- EDITED ---
Initialization code:
void CocoaBitmapContext::init(double width, double height, unsigned int* data)
{
    int bytesPerRow = width * 2 * VT_BITMAP_BYTES_PER_PIXEL;
    if (data == NULL)
    {
        _dataOwner = true;
        _data = (unsigned int*)malloc(bytesPerRow * height * 2);
    }
    else
    {
        _dataOwner = false;
        _data = data;
    }

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    _cgContext = CGBitmapContextCreate(_data,
                                       width * 2,
                                       height * 2,
                                       VT_BITMAP_BITS_PER_COMPONENT,
                                       bytesPerRow,
                                       colorSpace,
                                       kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
    CGContextConcatCTM(_cgContext, CGAffineTransformMakeScale([UIScreen mainScreen].scale, [UIScreen mainScreen].scale));
    CGContextSetShouldAntialias(_cgContext, false);
    CGContextSetRGBStrokeColor(_cgContext, 0.0, 0.0, 0.0, 0.0);
    CGColorSpaceRelease(colorSpace);
}
Bitmap sizes are specified in pixels, whereas sizes on the screen are in points. You can get the screen's scale factor using [UIScreen mainScreen].scale; that is how many pixels are in a point. For devices with retina displays, this value will be 2. You will need to scale the size of your canvas by that factor. You should also concatenate a scale transform immediately after you create the context. When you draw the images, you should still use the screen bounds as the destination rectangle (the scale transform takes care of the scaling).
CGSize canvasSize = [UIScreen mainScreen].bounds.size;
CGFloat scale = [UIScreen mainScreen].scale;
canvasSize.width *= scale;
canvasSize.height *= scale;
UIGraphicsBeginImageContext(canvasSize);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextConcatCTM(context, CGAffineTransformMakeScale(scale, scale));
...
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
If you are going to display it in an image view, the image view should fill the screen and its contentMode should be UIViewContentModeScaleToFill.

iOS - CoreImage - Add an effect to partial of image

I just had a look at the CoreImage framework on iOS 5 and found that it's easy to add an effect to a whole image.
I wonder if it's possible to add an effect to a specific part of an image (a rectangle), for example a grayscale effect on part of the image.
I look forward to your help.
Thanks,
Huy
Watch session 510 from the WWDC 2012 videos. They present a technique for applying a mask to a CIImage. You need to learn how to chain the filters together. In particular, take a look at:
CICrop, CILinearGradient, CIRadialGradient (could be used to create the mask)
CISourceOverCompositing (put mask images together)
CIBlendWithMask (create final image)
The filters are documented here:
https://developer.apple.com/library/archive/documentation/GraphicsImaging/Reference/CoreImageFilterReference/index.html
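As an illustration of that chaining in Xamarin.iOS, a hedged, untested sketch using the strongly typed wrappers for the filters named above (note that CIImage coordinates have a bottom-left origin, so the rectangle's y may need flipping):

CIImage GrayscaleInRect (CIImage input, RectangleF rect)
{
    // Desaturate the whole image.
    var mono = new CIColorControls { Image = input, Saturation = 0f }.OutputImage;

    // Build a mask that is white inside the rectangle: CIConstantColorGenerator
    // yields an infinite white image, CICrop limits it to the rect.
    var white = new CIConstantColorGenerator { Color = new CIColor (UIColor.White.CGColor) }.OutputImage;
    var mask = new CICrop {
        Image = white,
        Rectangle = new CIVector (rect.X, rect.Y, rect.Width, rect.Height)
    }.OutputImage;

    // Grayscale where the mask is white, the original everywhere else.
    return new CIBlendWithMask { Image = mono, BackgroundImage = input, MaskImage = mask }.OutputImage;
}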
Your best bet would be to copy the CIImage (so you now have two), crop the copied CIImage to the rect you want to affect, perform the effect on that cropped version, then use an overlay effect to create a new CIImage based on the two older CIImages, as sketched below.
It seems like a lot of effort, but when you understand that all of this is being set up as a bunch of GPU shaders, it makes a lot more sense.
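A rough sketch of that copy/crop/composite recipe (ours, untested), again with the strongly typed Xamarin.iOS filter wrappers:

CIImage GrayscalePatch (CIImage input, RectangleF rect)
{
    // Crop a copy of the image to the rect of interest.
    var cropped = new CICrop {
        Image = input,
        Rectangle = new CIVector (rect.X, rect.Y, rect.Width, rect.Height)
    }.OutputImage;

    // Apply the effect to the cropped copy only.
    var gray = new CIColorControls { Image = cropped, Saturation = 0f }.OutputImage;

    // Composite the processed patch back over the untouched original.
    return new CISourceOverCompositing { Image = gray, BackgroundImage = input }.OutputImage;
}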
typedef enum {
    ALPHA = 0,
    BLUE = 1,
    GREEN = 2,
    RED = 3
} PIXELS;

- (UIImage *)convertToGrayscale:(UIImage *)originalImage inRect:(CGRect)rect {
    CGSize size = [originalImage size];
    int width = size.width;
    int height = size.height;

    // the pixels will be painted to this array
    uint32_t *pixels = (uint32_t *) malloc(width * height * sizeof(uint32_t));
    // clear the pixels so any transparency is preserved
    memset(pixels, 0, width * height * sizeof(uint32_t));

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // create a context with RGBA pixels
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * sizeof(uint32_t), colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);

    // paint the bitmap to our context which will fill in the pixels array
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [originalImage CGImage]);

    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            uint8_t *rgbaPixel = (uint8_t *) &pixels[y * width + x];
            if (x > rect.origin.x && y > rect.origin.y &&
                x < rect.origin.x + rect.size.width && y < rect.origin.y + rect.size.height) {
                // convert to grayscale using recommended method: http://en.wikipedia.org/wiki/Grayscale#Converting_color_to_grayscale
                uint32_t gray = 0.3 * rgbaPixel[RED] + 0.59 * rgbaPixel[GREEN] + 0.11 * rgbaPixel[BLUE];
                // set the pixels to gray in your rect
                rgbaPixel[RED] = gray;
                rgbaPixel[GREEN] = gray;
                rgbaPixel[BLUE] = gray;
            }
        }
    }

    // create a new CGImageRef from our context with the modified pixels
    CGImageRef image = CGBitmapContextCreateImage(context);

    // we're done with the context, color space, and pixels
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(pixels);

    // make a new UIImage to return
    UIImage *resultUIImage = [UIImage imageWithCGImage:image];

    // we're done with image now too
    CGImageRelease(image);

    return resultUIImage;
}
You can test it in a UIImageView:
imageview.image = [self convertToGrayscale:imageview.image inRect:CGRectMake(50, 50, 100, 100)];
