Masking an image using a bezier path at the image's full resolution - iOS

Hi, I have a path (shape) and a high-resolution image. I display the high-res image aspect-fit inside the view on which I draw the path, and I want to mask the image with the path at the full resolution of the image, not at the resolution at which we see the path. The problem: it works perfectly when I don't scale everything up for high-resolution masking, but when I do, everything is messed up. The mask gets stretched and the origins don't make sense.
All I want is to be able to scale the path up, keeping the image's aspect ratio (to the full resolution of the image), and position it correctly so it can mask the high-res image properly.
I've tried this:
Masking CGContext with a CGPathRef?
and this
Creating mask with CGImageMaskCreate is all black (iphone)
and this
Clip UIImage to UIBezierPath (not masking)
none of which works correctly when I try to mask a high-quality image (bigger than the screen resolution).
EDIT: I posted a working project on GitHub that shows the difference between normal-quality masking (at the screen's resolution) and high-quality masking (at the image's resolution). I'd really appreciate any help.
https://github.com/Reza-Abdolahi/HighResMasking

If I understand your question correctly:
You have an image view containing an image that may have been scaled down (or even scaled up) using UIViewContentModeScaleAspectFit.
You have a bezier path whose points are in the geometry (coordinate system) of that image view.
And now you want to create a copy of the image, at its original resolution, masked by the bezier path.
We can think of the image as having its own geometry, with the origin at the top left corner of the image and one unit along each axis being one point. So what we need to do is:
1. Create a graphics renderer big enough to draw the image into without scaling. The geometry of this renderer is the image's geometry.
2. Transform the bezier path from the view geometry to the renderer geometry.
3. Apply the transformed path to the renderer's clip region.
4. Draw the image (untransformed) into the renderer.
Step 2 is the hard one, because we have to come up with the correct CGAffineTransform. In an aspect-fit scenario, the transform needs to not only scale the image, but possibly translate it along either the x axis or the y axis (but not both). But let's be more general and support other UIViewContentMode settings. Here's a category that lets you ask a UIImageView for the transform that converts points in the view's geometry to points in the image's geometry:
@implementation UIImageView (ImageGeometry)

/**
 * Return a transform that converts points in my geometry to points in the
 * image's geometry. The origin of the image's geometry is at its upper
 * left corner, and one unit along each axis is one point in the image.
 */
- (CGAffineTransform)imageGeometryTransform {
    CGRect viewBounds = self.bounds;
    CGSize viewSize = viewBounds.size;
    CGSize imageSize = self.image.size;

    CGFloat xScale = imageSize.width / viewSize.width;
    CGFloat yScale = imageSize.height / viewSize.height;
    CGFloat tx, ty;
    switch (self.contentMode) {
        case UIViewContentModeScaleToFill: tx = 0; ty = 0; break;
        case UIViewContentModeScaleAspectFit:
            if (xScale > yScale) { tx = 0; ty = 0.5; yScale = xScale; }
            else if (xScale < yScale) { tx = 0.5; ty = 0; xScale = yScale; }
            else { tx = 0; ty = 0; }
            break;
        case UIViewContentModeScaleAspectFill:
            if (xScale < yScale) { tx = 0; ty = 0.5; yScale = xScale; }
            else if (xScale > yScale) { tx = 0.5; ty = 0; xScale = yScale; }
            else { tx = 0; ty = 0; imageSize = viewSize; }
            break;
        case UIViewContentModeCenter:      tx = 0.5; ty = 0.5; xScale = yScale = 1; break;
        case UIViewContentModeTop:         tx = 0.5; ty = 0;   xScale = yScale = 1; break;
        case UIViewContentModeBottom:      tx = 0.5; ty = 1;   xScale = yScale = 1; break;
        case UIViewContentModeLeft:        tx = 0;   ty = 0.5; xScale = yScale = 1; break;
        case UIViewContentModeRight:       tx = 1;   ty = 0.5; xScale = yScale = 1; break;
        case UIViewContentModeTopLeft:     tx = 0;   ty = 0;   xScale = yScale = 1; break;
        case UIViewContentModeTopRight:    tx = 1;   ty = 0;   xScale = yScale = 1; break;
        case UIViewContentModeBottomLeft:  tx = 0;   ty = 1;   xScale = yScale = 1; break;
        case UIViewContentModeBottomRight: tx = 1;   ty = 1;   xScale = yScale = 1; break;
        default: return CGAffineTransformIdentity; // Mode not supported by UIImageView.
    }
    tx *= (imageSize.width - xScale * (viewBounds.origin.x + viewSize.width));
    ty *= (imageSize.height - yScale * (viewBounds.origin.y + viewSize.height));
    CGAffineTransform transform = CGAffineTransformMakeTranslation(tx, ty);
    transform = CGAffineTransformScale(transform, xScale, yScale);
    return transform;
}

@end
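As a sanity check on the translation math, here is a worked example with hypothetical numbers (a 3000×2000 image aspect-fit in a 300×300 view; the numbers are mine, not from the original answer):

// Hypothetical worked example for UIViewContentModeScaleAspectFit:
//   imageSize = 3000x2000, viewBounds = (0, 0, 300, 300)
//   xScale = 3000/300 = 10, yScale = 2000/300 ≈ 6.67
//   xScale > yScale, so: tx = 0, ty = 0.5, and yScale is bumped up to xScale = 10
//   tx *= (3000 - 10 * (0 + 300))  =>  tx = 0
//   ty *= (2000 - 10 * (0 + 300))  =>  ty = 0.5 * -1000 = -500
// The resulting transform maps a view point (x, y) to (10x, 10y - 500), which
// correctly cancels the 50-point letterbox band that aspect-fit leaves above
// the 300x200 displayed image.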
Armed with this, we can write the code that masks the image. In my test app, I have a subclass of UIImageView named PathEditingView that handles the bezier path editing. So my view controller creates the masked image like this:
- (UIImage *)maskedImage {
    UIImage *image = self.pathEditingView.image;
    UIGraphicsImageRendererFormat *format = [[UIGraphicsImageRendererFormat alloc] init];
    format.scale = image.scale;
    format.prefersExtendedRange = image.imageRendererFormat.prefersExtendedRange;
    format.opaque = NO;
    UIGraphicsImageRenderer *renderer = [[UIGraphicsImageRenderer alloc] initWithSize:image.size format:format];
    return [renderer imageWithActions:^(UIGraphicsImageRendererContext * _Nonnull rendererContext) {
        UIBezierPath *path = [self.pathEditingView.path copy];
        [path applyTransform:self.pathEditingView.imageGeometryTransform];
        CGContextRef gc = UIGraphicsGetCurrentContext();
        CGContextAddPath(gc, path.CGPath);
        CGContextClip(gc);
        [image drawAtPoint:CGPointZero];
    }];
}
And it looks like this: [screenshot of the masked result]
Of course it's hard to tell that the output image is full-resolution. Let's fix that by cropping the output image to the bounding box of the bezier path:
- (UIImage *)maskedAndCroppedImage {
    UIImage *image = self.pathEditingView.image;
    UIBezierPath *path = [self.pathEditingView.path copy];
    [path applyTransform:self.pathEditingView.imageGeometryTransform];
    CGRect pathBounds = CGPathGetPathBoundingBox(path.CGPath);
    UIGraphicsImageRendererFormat *format = [[UIGraphicsImageRendererFormat alloc] init];
    format.scale = image.scale;
    format.prefersExtendedRange = image.imageRendererFormat.prefersExtendedRange;
    format.opaque = NO;
    UIGraphicsImageRenderer *renderer = [[UIGraphicsImageRenderer alloc] initWithSize:pathBounds.size format:format];
    return [renderer imageWithActions:^(UIGraphicsImageRendererContext * _Nonnull rendererContext) {
        CGContextRef gc = UIGraphicsGetCurrentContext();
        CGContextTranslateCTM(gc, -pathBounds.origin.x, -pathBounds.origin.y);
        CGContextAddPath(gc, path.CGPath);
        CGContextClip(gc);
        [image drawAtPoint:CGPointZero];
    }];
}
Masking and cropping together look like this: [screenshot of the masked and cropped result]
You can see in this demo that the output image has much more detail than was visible in the input view, because it was generated at the full resolution of the input image.
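For completeness, here is a minimal sketch of how the masked image might be exported; the button action, the PNG format, and the file name are my assumptions, not part of the original answer:

// Hypothetical usage: export the masked image as a PNG when a button is tapped.
// -maskedImage is the method defined above; everything else is an assumption.
- (IBAction)exportTapped:(id)sender {
    UIImage *masked = [self maskedImage];
    NSData *pngData = UIImagePNGRepresentation(masked);
    NSURL *documentsURL = [[NSFileManager defaultManager] URLForDirectory:NSDocumentDirectory
                                                                 inDomain:NSUserDomainMask
                                                        appropriateForURL:nil
                                                                   create:YES
                                                                    error:NULL];
    NSURL *fileURL = [documentsURL URLByAppendingPathComponent:@"masked.png"];
    [pngData writeToURL:fileURL atomically:YES];
}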

As a secondary answer, I made it work with the code below. For a better understanding, you can get the working project on GitHub as well, to check whether it works in all cases.
My GitHub project:
https://github.com/Reza-Abdolahi/HighResMasking
The part of the code that solved the problem:
-(UIImage *)highResolutionMasking {
    NSLog(@"///High quality (image resolution) masking///////////////////////////////////////////////////");

    // 1. Render the path into an image with the size of _targetBound (the size of
    //    the device-screen-sized view in which the path is drawn).
    CGFloat aspectRatioOfImageBasedOnHeight = _highResolutionImage.size.height / _highResolutionImage.size.width;
    CGFloat aspectRatioOfTargetBoundBasedOnHeight = _targetBound.size.height / _targetBound.size.width;
    CGFloat pathScalingFactor = 0;

    if ((_highResolutionImage.size.height >= _targetBound.size.height) ||
        (_highResolutionImage.size.width >= _targetBound.size.width)) {
        // The image is bigger than targetBound.
        if (_highResolutionImage.size.height <= _highResolutionImage.size.width) {
            // The image is horizontal.
            CGFloat newWidthForTargetBound = _highResolutionImage.size.width;
            CGFloat ratioOfHighresImgWidthToTargetBoundWidth = (_highResolutionImage.size.width / _targetBound.size.width);
            CGFloat newHeightForTargetBound = _targetBound.size.height * ratioOfHighresImgWidthToTargetBoundWidth;
            _bigTargetBound = CGRectMake(0, 0, newWidthForTargetBound, newHeightForTargetBound);
            pathScalingFactor = _highResolutionImage.size.width / _targetBound.size.width;
        } else if ((_highResolutionImage.size.height > _highResolutionImage.size.width) &&
                   (aspectRatioOfImageBasedOnHeight <= aspectRatioOfTargetBoundBasedOnHeight)) {
            // The image is vertical but has a smaller aspect ratio (based on height) than targetBound.
            CGFloat newWidthForTargetBound = _highResolutionImage.size.width;
            CGFloat ratioOfHighresImgWidthToTargetBoundWidth = (_highResolutionImage.size.width / _targetBound.size.width);
            CGFloat newHeightForTargetBound = _targetBound.size.height * ratioOfHighresImgWidthToTargetBoundWidth;
            _bigTargetBound = CGRectMake(0, 0, newWidthForTargetBound, newHeightForTargetBound);
            pathScalingFactor = _highResolutionImage.size.width / _targetBound.size.width;
        } else if ((_highResolutionImage.size.height > _highResolutionImage.size.width) &&
                   (aspectRatioOfImageBasedOnHeight > aspectRatioOfTargetBoundBasedOnHeight)) {
            CGFloat newHeightForTargetBound = _highResolutionImage.size.height;
            CGFloat ratioOfHighresImgHeightToTargetBoundHeight = (_highResolutionImage.size.height / _targetBound.size.height);
            CGFloat newWidthForTargetBound = _targetBound.size.width * ratioOfHighresImgHeightToTargetBoundHeight;
            _bigTargetBound = CGRectMake(0, 0, newWidthForTargetBound, newHeightForTargetBound);
            pathScalingFactor = _highResolutionImage.size.height / _targetBound.size.height;
        } else {
            // Do nothing.
        }
    } else {
        // The image is smaller than targetBound.
        _bigTargetBound = _imageRect;
        pathScalingFactor = 1;
    }

    CGSize correctedSize = CGSizeMake(_highResolutionImage.size.width * _scale,
                                      _highResolutionImage.size.height * _scale);
    _bigImageRect = AVMakeRectWithAspectRatioInsideRect(correctedSize, _bigTargetBound);

    // Scale the path.
    CGAffineTransform scaleTransform = CGAffineTransformIdentity;
    scaleTransform = CGAffineTransformScale(scaleTransform, pathScalingFactor, pathScalingFactor);
    CGPathRef scaledCGPath = CGPathCreateCopyByTransformingPath(_examplePath.CGPath, &scaleTransform);

    // Render the scaled path into an image.
    UIGraphicsBeginImageContextWithOptions(_bigTargetBound.size, NO, 1.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextAddPath(context, scaledCGPath);
    CGContextSetFillColorWithColor(context, [UIColor redColor].CGColor);
    CGContextSetStrokeColorWithColor(context, [UIColor redColor].CGColor);
    CGContextDrawPath(context, kCGPathFillStroke);
    UIImage *pathImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    NSLog(@"High res pathImage.size: %@", NSStringFromCGSize(pathImage.size));

    // Crop it from targetBound into imageRect.
    _maskImage = [self cropThisImage:pathImage toRect:_bigImageRect];
    NSLog(@"High res _croppedRenderedPathImage.size: %@", NSStringFromCGSize(_maskImage.size));

    // Mask the high-res image with the mask image; both now have the same size.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef maskImageRef = [_maskImage CGImage];
    CGContextRef myContext = CGBitmapContextCreate(NULL, _highResolutionImage.size.width, _highResolutionImage.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (myContext == NULL)
        return NULL;

    CGFloat ratio = 0;
    ratio = _maskImage.size.width / _highResolutionImage.size.width;
    if (ratio * _highResolutionImage.size.height < _maskImage.size.height) {
        ratio = _maskImage.size.height / _highResolutionImage.size.height;
    }
    CGRect rectForMask = {{0, 0}, {_maskImage.size.width, _maskImage.size.height}};
    CGRect rectForImageDrawing = {{-((_highResolutionImage.size.width * ratio) - _maskImage.size.width) / 2, -((_highResolutionImage.size.height * ratio) - _maskImage.size.height) / 2},
                                  {_highResolutionImage.size.width * ratio, _highResolutionImage.size.height * ratio}};
    CGContextClipToMask(myContext, rectForMask, maskImageRef);
    CGContextDrawImage(myContext, rectForImageDrawing, _highResolutionImage.CGImage);
    CGImageRef newImage = CGBitmapContextCreateImage(myContext);
    CGContextRelease(myContext);
    UIImage *theImage = [UIImage imageWithCGImage:newImage];
    CGImageRelease(newImage);
    return theImage;
}

-(UIImage *)cropThisImage:(UIImage *)image toRect:(CGRect)rect {
    CGImageRef subImage = CGImageCreateWithImageInRect(image.CGImage, rect);
    UIImage *croppedImage = [UIImage imageWithCGImage:subImage];
    CGImageRelease(subImage);
    return croppedImage;
}
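As a rough sketch of how this method might be wired up from the same class: the ivar names come from the method above, but the values and the pathDrawingView/resultImageView properties are illustrative assumptions, and AVMakeRectWithAspectRatioInsideRect requires importing AVFoundation.

// Hypothetical wiring for -highResolutionMasking; everything assigned here is an assumption.
- (void)performHighResolutionMasking {
    _highResolutionImage = [UIImage imageNamed:@"photo"];  // full-size source image (placeholder name)
    _scale = _highResolutionImage.scale;                   // assumed meaning of _scale
    _targetBound = self.pathDrawingView.bounds;            // the screen-sized view the path was drawn in
    _imageRect = AVMakeRectWithAspectRatioInsideRect(_highResolutionImage.size, _targetBound);
    _examplePath = self.pathDrawingView.path;              // the UIBezierPath drawn by the user
    self.resultImageView.image = [self highResolutionMasking];
}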

Related

iOS/Prevent image aspect ratio from changing when resizing

I am using the code below, first to create an image thumbnail (using a category) and then to tailor the thumbnail to the VC in question, for example, making it round.
Somehow, the aspect ratio of the images is not being preserved: some get squashed vertically, so a face looks like a sideways oval, while others get squashed horizontally, so a round ball looks like an upright football. In the code for individual VCs I am using UIViewContentModeScaleAspectFill and setting clipsToBounds to YES, but to no avail. I also tried setting these in the Storyboard, but still no luck.
Can anyone see what might be wrong with the code below?
// Code in viewDidLoad
UIImage *thumbnail = [selectedImage createThumbnailToFillSize:CGSizeMake(side, side)];
// See the createThumbnailToFillSize: method below.
self.contactImage.image = thumbnail;

// The image has been selected and trimmed to a thumb. Now format it.
CGSize itemSize = CGSizeMake(64, 64);
UIGraphicsBeginImageContextWithOptions(itemSize, NO, UIScreen.mainScreen.scale);
CGRect imageRect = CGRectMake(0.0, 0.0, itemSize.width, itemSize.height);
self.contactImage.contentMode = UIViewContentModeScaleAspectFill;
self.contactImage.clipsToBounds = YES;
[self.contactImage.image drawInRect:imageRect];
self.contactImage.image = UIGraphicsGetImageFromCurrentImageContext();
self.contactImage.layer.cornerRadius = 60.0;
UIGraphicsEndImageContext();

// Generic category to create the thumb
- (UIImage *)createThumbnailToFillSize:(CGSize)size
{
    CGSize mainImageSize = size;
    UIImage *thumb;
    CGFloat widthScaler = size.width / mainImageSize.width;
    CGFloat heightScaler = size.height / mainImageSize.height;
    CGSize repositionedMainImageSize = mainImageSize;
    CGFloat scaleFactor;
    // Determine if we should shrink based on width or height
    if (widthScaler > heightScaler)
    {
        // Calculate based on width scaler
        scaleFactor = widthScaler;
        repositionedMainImageSize.height = ceil(size.height / scaleFactor);
    }
    else {
        // Calculate based on height scaler
        scaleFactor = heightScaler;
        repositionedMainImageSize.width = ceil(size.width / heightScaler);
    }
    UIGraphicsBeginImageContext(size);
    CGFloat xInc = ((repositionedMainImageSize.width - mainImageSize.width) / 2.f) * scaleFactor;
    CGFloat yInc = ((repositionedMainImageSize.height - mainImageSize.height) / 2.f) * scaleFactor;
    [self drawInRect:CGRectMake(xInc, yInc, mainImageSize.width * scaleFactor, mainImageSize.height * scaleFactor)];
    thumb = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return thumb;
}

Cropping UIImage in iOS

I am very new to iOS and the first task given to me is image cropping. That means: if I am using an image as a banner image and the given frame size is larger or smaller than the size of the image, my code should automatically resize the image, preserving its aspect ratio, and then set the image in that frame.
I have done a lot of R&D, and after that I wrote this code:
-(UIImage *)MyScaleNEwMethodwithImage:(UIImage *)image andframe:(CGRect)frame {
    float bmHeight = image.size.height;
    float bmWidth = image.size.width;
    UIImage *RecievedImage = image;
    if (bmHeight > bmWidth) {
        float ratio = frame.size.height / frame.size.width;
        float newbmHeight = ratio * bmWidth;
        float cropedHeight = (bmHeight - newbmHeight) / 2;
        if (cropedHeight < 0) {
            float ratio1 = frame.size.width / frame.size.height;
            float newbmHeight1 = (ratio1 * bmHeight);
            float cropedimageHeight1 = (bmWidth - newbmHeight1) / 2;
            CGRect cliprect = CGRectMake(cropedimageHeight1, 0, bmWidth - cropedimageHeight1, bmHeight);
            CGImageRef imref = CGImageCreateWithImageInRect([image CGImage], cliprect);
            UIImage *newSubImage = [UIImage imageWithCGImage:imref];
            return newSubImage;
        }
        else
        {
            CGRect cliprect = CGRectMake(0, cropedHeight, bmWidth, bmHeight - cropedHeight);
            CGImageRef imref = CGImageCreateWithImageInRect([image CGImage], cliprect);
            UIImage *newSubImage = [UIImage imageWithCGImage:imref];
            return newSubImage;
        }
    }
    else
    {
        float ratio = frame.size.height / frame.size.width;
        float newbmHeight = ratio * bmHeight;
        float cropedHeight = (bmHeight - newbmHeight) / 4;
        if (cropedHeight < 0) {
            float ratio1 = frame.size.width / frame.size.height;
            float newbmHeight1 = (ratio1 * bmWidth);
            float cropedimageHeight1 = (bmWidth - newbmHeight1) / 2;
            UIImageView *DummyImage = [[UIImageView alloc] initWithFrame:CGRectMake(0, cropedimageHeight1, bmWidth, (bmHeight - cropedimageHeight1))];
            [DummyImage setImage:RecievedImage];
            CGImageRef imageRef = CGImageCreateWithImageInRect([DummyImage.image CGImage], CGRectMake(0, cropedimageHeight1 / 2, bmWidth / 2, (bmHeight - cropedimageHeight1) / 2));
            // or use the UIImage wherever you like
            [DummyImage setImage:[UIImage imageWithCGImage:imageRef]];
            CGImageRelease(imageRef);
            UIImage *ScaledImage = [UIImage imageWithCGImage:imageRef];
            return ScaledImage;
        } else {
            UIImageView *DummyImage = [[UIImageView alloc] initWithFrame:CGRectMake(cropedHeight, 0, bmWidth - cropedHeight, bmHeight)];
            [DummyImage setImage:RecievedImage];
            CGImageRef imageRef = CGImageCreateWithImageInRect([DummyImage.image CGImage], CGRectMake(cropedHeight, 2 * cropedHeight, (bmWidth - cropedHeight), bmHeight / 2));
            // or use the UIImage wherever you like
            [DummyImage setImage:[UIImage imageWithCGImage:imageRef]];
            CGImageRelease(imageRef);
            UIImage *ScaledImage = [UIImage imageWithCGImage:imageRef];
            return ScaledImage;
        }
    }
}
In my frame I am getting the required image, but when the screen changes I can see the full image. I want to cut off the unwanted part of the image.
This piece of code may help you out:
- (CGRect)cropImage:(CGRect)frame
{
    NSAssert(self.contentMode == UIViewContentModeScaleAspectFit, @"content mode should be aspect fit");
    CGFloat wScale = self.bounds.size.width / self.image.size.width;
    CGFloat hScale = self.bounds.size.height / self.image.size.height;
    float x, y, w, h, offset;
    if (wScale < hScale) {
        offset = (self.bounds.size.height - (self.image.size.height * wScale)) / 2;
        x = frame.origin.x / wScale;
        y = (frame.origin.y - offset) / wScale;
        w = frame.size.width / wScale;
        h = frame.size.height / wScale;
    } else {
        offset = (self.bounds.size.width - (self.image.size.width * hScale)) / 2;
        x = (frame.origin.x - offset) / hScale;
        y = frame.origin.y / hScale;
        w = frame.size.width / hScale;
        h = frame.size.height / hScale;
    }
    return CGRectMake(x, y, w, h);
}
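A possible usage sketch (my own, not from the answer), assuming cropImage: is declared in a UIImageView category as above and the image's scale is 1:

// Hypothetical usage: convert a selection rect from the image view's coordinate
// space into image space, then crop the underlying image.
CGRect selectionInView = CGRectMake(40, 40, 120, 80);   // placeholder selection in view coordinates
CGRect cropRect = [self.bannerImageView cropImage:selectionInView];
CGImageRef croppedRef = CGImageCreateWithImageInRect(self.bannerImageView.image.CGImage, cropRect);
UIImage *cropped = [UIImage imageWithCGImage:croppedRef];
CGImageRelease(croppedRef);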
The code I referred to for cropping the image according to its aspect ratio is:
typedef enum {
    MGImageResizeCrop,
    MGImageResizeCropStart,
    MGImageResizeCropEnd,
    MGImageResizeScale
} MGImageResizingMethod;

- (UIImage *)imageToFitSize:(CGSize)fitSize method:(MGImageResizingMethod)resizeMethod
{
    float imageScaleFactor = 1.0;
#if __IPHONE_OS_VERSION_MAX_ALLOWED >= 40000
    if ([self respondsToSelector:@selector(scale)]) {
        imageScaleFactor = [self scale];
    }
#endif
    float sourceWidth = [self size].width * imageScaleFactor;
    float sourceHeight = [self size].height * imageScaleFactor;
    float targetWidth = fitSize.width;
    float targetHeight = fitSize.height;
    BOOL cropping = !(resizeMethod == MGImageResizeScale);

    // Calculate aspect ratios
    float sourceRatio = sourceWidth / sourceHeight;
    float targetRatio = targetWidth / targetHeight;

    // Determine what side of the source image to use for proportional scaling
    BOOL scaleWidth = (sourceRatio <= targetRatio);
    // Deal with the case of just scaling proportionally to fit, without cropping
    scaleWidth = (cropping) ? scaleWidth : !scaleWidth;

    // Proportionally scale source image
    float scalingFactor, scaledWidth, scaledHeight;
    if (scaleWidth) {
        scalingFactor = 1.0 / sourceRatio;
        scaledWidth = targetWidth;
        scaledHeight = round(targetWidth * scalingFactor);
    } else {
        scalingFactor = sourceRatio;
        scaledWidth = round(targetHeight * scalingFactor);
        scaledHeight = targetHeight;
    }
    float scaleFactor = scaledHeight / sourceHeight;

    // Calculate compositing rectangles
    CGRect sourceRect, destRect;
    if (cropping) {
        destRect = CGRectMake(0, 0, targetWidth, targetHeight);
        float destX, destY;
        if (resizeMethod == MGImageResizeCrop) {
            // Crop center
            destX = round((scaledWidth - targetWidth) / 2.0);
            destY = round((scaledHeight - targetHeight) / 2.0);
        } else if (resizeMethod == MGImageResizeCropStart) {
            // Crop top or left (prefer top)
            if (scaleWidth) {
                // Crop top
                destX = 0.0;
                destY = 0.0;
            } else {
                // Crop left
                destX = 0.0;
                destY = round((scaledHeight - targetHeight) / 2.0);
            }
        } else if (resizeMethod == MGImageResizeCropEnd) {
            // Crop bottom or right
            if (scaleWidth) {
                // Crop bottom
                destX = round((scaledWidth - targetWidth) / 2.0);
                destY = round(scaledHeight - targetHeight);
            } else {
                // Crop right
                destX = round(scaledWidth - targetWidth);
                destY = round((scaledHeight - targetHeight) / 2.0);
            }
        }
        sourceRect = CGRectMake(destX / scaleFactor, destY / scaleFactor,
                                targetWidth / scaleFactor, targetHeight / scaleFactor);
    } else {
        sourceRect = CGRectMake(0, 0, sourceWidth, sourceHeight);
        destRect = CGRectMake(0, 0, scaledWidth, scaledHeight);
    }

    // Create appropriately modified image.
    UIImage *image = nil;
#if __IPHONE_OS_VERSION_MAX_ALLOWED >= 40000
    if ([[[UIDevice currentDevice] systemVersion] floatValue] >= 4.0) {
        UIGraphicsBeginImageContextWithOptions(destRect.size, NO, 0.0); // 0.0 for scale means "correct scale for device's main screen".
        CGImageRef sourceImg = CGImageCreateWithImageInRect([self CGImage], sourceRect); // cropping happens here.
        image = [UIImage imageWithCGImage:sourceImg scale:0.0 orientation:self.imageOrientation]; // create cropped UIImage.
        [image drawInRect:destRect]; // the actual scaling happens here, and orientation is taken care of automatically.
        CGImageRelease(sourceImg);
        image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }
#endif
    if (!image) {
        // Try the older method.
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(NULL, fitSize.width, fitSize.height, 8, (fitSize.width * 4),
                                                     colorSpace, kCGImageAlphaPremultipliedLast);
        CGImageRef sourceImg = CGImageCreateWithImageInRect([self CGImage], sourceRect);
        CGContextDrawImage(context, destRect, sourceImg);
        CGImageRelease(sourceImg);
        CGImageRef finalImage = CGBitmapContextCreateImage(context);
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);
        image = [UIImage imageWithCGImage:finalImage];
        CGImageRelease(finalImage);
    }
    return image;
}
Check if this helps you.
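For reference, a hypothetical call site, assuming the method above is declared in a UIImage category:

// Hypothetical usage; the image name is a placeholder.
UIImage *banner = [UIImage imageNamed:@"banner"];
UIImage *fitted = [banner imageToFitSize:CGSizeMake(320, 120) method:MGImageResizeCrop];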

Drawing a pattern along a path

My goal is to take a pattern like this
and draw it repeatedly along a circular path to produce something similar to this image:
I found several code examples in other questions and a full demo project here, but the result is this:
I think the difference between the two images is obvious, but I find it hard to describe (pardon my lack of graphics vocabulary). The result is tiled without the desired rotation/deformation of the pattern. I think I can live with the lack of deformation, but the rotation is key. Perhaps the draw callback could/should be modified to include a rotation, but I can't figure out how to retrieve or determine the angle at the point of the callback.
I considered an approach where I manually deformed/rotated the image and drew it several times around a centerpoint to achieve the effect I want, but I believe that CoreGraphics could do it with more efficiency and with less code.
Any suggestions about how to achieve the result I want would be appreciated.
Here is the relevant code from the ChalkCircle project:
const float kPatternWidth = 8;
const float kPatternHeight = 8;

void DrawPatternCellCallback(void *info, CGContextRef cgContext)
{
    UIImage *patternImage = [UIImage imageNamed:@"chalk_brush.png"];
    CGContextDrawImage(cgContext, CGRectMake(0, 0, kPatternWidth, kPatternHeight), patternImage.CGImage);
}

- (void)drawRect:(CGRect)rect {
    float startDeg = 0;   // where to start drawing
    float endDeg = 360;   // where to stop drawing
    int x = self.center.x;
    int y = self.center.y;
    int radius = (self.bounds.size.width > self.bounds.size.height ? self.bounds.size.height : self.bounds.size.width) / 2 * 0.8;

    CGContextRef ctx = UIGraphicsGetCurrentContext();
    const CGRect patternBounds = CGRectMake(0, 0, kPatternWidth, kPatternHeight);
    const CGPatternCallbacks kPatternCallbacks = {0, DrawPatternCellCallback, NULL};
    CGAffineTransform patternTransform = CGAffineTransformIdentity;
    CGPatternRef strokePattern = CGPatternCreate(
        NULL,
        patternBounds,
        patternTransform,
        kPatternWidth,  // horizontal spacing
        kPatternHeight, // vertical spacing
        kCGPatternTilingNoDistortion,
        true,
        &kPatternCallbacks);
    CGFloat color1[] = {1.0, 1.0, 1.0, 1.0};
    CGColorSpaceRef patternSpace = CGColorSpaceCreatePattern(NULL);
    CGContextSetStrokeColorSpace(ctx, patternSpace);
    CGContextSetStrokePattern(ctx, strokePattern, color1);
    CGContextSetLineWidth(ctx, 4.0);
    CGContextMoveToPoint(ctx, x, y - radius);
    CGContextAddArc(ctx, x, y, radius, (startDeg - 90) * M_PI / 180.0, (endDeg - 90) * M_PI / 180.0, 0);
    CGContextClosePath(ctx);
    CGContextDrawPath(ctx, kCGPathStroke);
    CGPatternRelease(strokePattern);
    strokePattern = NULL;
    CGColorSpaceRelease(patternSpace);
    patternSpace = NULL;
}
SOLUTION FROM SAM
I modified sam's solution to handle non-square patterns, center the result, and remove hard-coded numbers by calculating them from the passed-in image:
#define MAX_CIRCLE_DIAMETER 290.0f
#define OVERLAP 1.5f

- (void)drawInCircle:(UIImage *)patternImage
{
    int numberOfImages = 12;
    float diameter = (MAX_CIRCLE_DIAMETER * numberOfImages * patternImage.size.width) / ((2.0 * M_PI * patternImage.size.height) + (numberOfImages * patternImage.size.width));

    // Get the radius, circumference and image size
    CGRect replicatorFrame = CGRectMake((320 - diameter) / 2.0f, 60.0f, diameter, diameter);
    float radius = diameter / 2;
    float circumference = M_PI * diameter;
    float imageWidth = circumference / numberOfImages;
    float imageHeight = imageWidth * patternImage.size.height / patternImage.size.width;

    // Create a replicator layer and add it to our view
    CAReplicatorLayer *replicator = [CAReplicatorLayer layer];
    replicator.frame = replicatorFrame;
    [self.view.layer addSublayer:replicator];

    // Configure the replicator
    replicator.instanceCount = numberOfImages;

    // Apply a rotation transform for each instance
    CATransform3D transform = CATransform3DIdentity;
    transform = CATransform3DRotate(transform, M_PI / (numberOfImages / 2), 0, 0, 1);
    replicator.instanceTransform = transform;

    // Create a sublayer and place it inside the replicator.
    CALayer *layer = [CALayer layer];
    // The frame places the layer in the middle of the replicator layer and on
    // the outside of the replicator layer, so that the size is accurate
    // relative to the circumference.
    layer.frame = CGRectMake(radius - (imageWidth / 2.0) - (OVERLAP / 2.0), -imageHeight / 2.0, imageWidth + OVERLAP, imageHeight);
    layer.anchorPoint = CGPointMake(0.5, 1);
    [replicator addSublayer:layer];

    // Apply a perspective transform to the layer
    CATransform3D perspectiveTransform = CATransform3DIdentity;
    perspectiveTransform.m34 = 1.0f / -radius;
    perspectiveTransform = CATransform3DRotate(perspectiveTransform, (M_PI_4), -1, 0, 0);
    layer.transform = perspectiveTransform;

    // Set the image as the layer's contents
    layer.contents = (__bridge id)patternImage.CGImage;
}
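A hypothetical call site for the method above, for example from viewDidLoad ("chalk_brush" stands in for whatever pattern image is used):

// Hypothetical usage; the image name is a placeholder.
[self drawInCircle:[UIImage imageNamed:@"chalk_brush"]];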
Using Core Animation's replicator layer, I managed to create this result:
I think it's close to what you're looking for. In this example all the images are square, with a 3D X rotation applied to each of them.
#import <QuartzCore/QuartzCore.h>

// Set the number of images and the diameter (width) of the circle
int numberOfImages = 30;
float diameter = 450.0f;

// Get the radius, circumference and image size
float radius = diameter / 2;
float circumference = M_PI * diameter;
float imageSize = circumference / numberOfImages;

// Create a replicator layer and add it to our view
CAReplicatorLayer *replicator = [CAReplicatorLayer layer];
replicator.frame = CGRectMake(100.0f, 100.0f, diameter, diameter);
[self.view.layer addSublayer:replicator];

// Configure the replicator
replicator.instanceCount = numberOfImages;

// Apply a rotation transform for each instance
CATransform3D transform = CATransform3DIdentity;
transform = CATransform3DRotate(transform, M_PI / (numberOfImages / 2), 0, 0, 1);
replicator.instanceTransform = transform;

// Create a sublayer and place it inside the replicator.
CALayer *layer = [CALayer layer];
// The frame places the layer in the middle of the replicator layer and on the
// outside of the replicator layer, so that the size is accurate relative to
// the circumference.
layer.frame = CGRectMake(radius - (imageSize / 2), -imageSize / 2, imageSize, imageSize);
layer.anchorPoint = CGPointMake(0.5, 1);
[replicator addSublayer:layer];

// Apply a perspective transform to the layer
CATransform3D perspectiveTransform = CATransform3DIdentity;
perspectiveTransform.m34 = 1.0f / -radius;
perspectiveTransform = CATransform3DRotate(perspectiveTransform, (M_PI_4), -1, 0, 0);
layer.transform = perspectiveTransform;

// Set the image as the layer's contents
layer.contents = (__bridge id)[UIImage imageNamed:@"WCR3Q"].CGImage;

How to crop a center square in a UIImage?

Sorry about this question, but I searched a lot of threads here in S.O. and I found nothing.
The question is: how to crop a center square in a UIImage?
I tried the code below, but without success. The cropping happens, but in the upper-left corner.
- (UIImage *)imageCrop:(UIImage *)original
{
    UIImage *ret = nil;

    // This calculates the crop area.
    float originalWidth = original.size.width;
    float originalHeight = original.size.height;
    float edge = fminf(originalWidth, originalHeight);
    float posX = (originalWidth - edge) / 2.0f;
    float posY = (originalHeight - edge) / 2.0f;
    CGRect cropSquare = CGRectMake(posX, posY, edge, edge);

    // This performs the image cropping.
    CGImageRef imageRef = CGImageCreateWithImageInRect([original CGImage], cropSquare);
    ret = [UIImage imageWithCGImage:imageRef
                              scale:original.scale
                        orientation:original.imageOrientation];
    CGImageRelease(imageRef);
    return ret;
}
Adding more information: I'm using this crop after a photo capture.
After some tests, I found something that worked:
// If the orientation indicates a change to portrait.
if (original.imageOrientation == UIImageOrientationLeft ||
    original.imageOrientation == UIImageOrientationRight)
{
    cropSquare = CGRectMake(posY, posX, edge, edge);
}
else
{
    cropSquare = CGRectMake(posX, posY, edge, edge);
}
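Putting the two snippets together, a combined version might look like this (a sketch assembled from the code above, not independently verified):

- (UIImage *)imageCropCenterSquare:(UIImage *)original
{
    float originalWidth = original.size.width;
    float originalHeight = original.size.height;
    float edge = fminf(originalWidth, originalHeight);
    float posX = (originalWidth - edge) / 2.0f;
    float posY = (originalHeight - edge) / 2.0f;

    // CGImageCreateWithImageInRect works in the bitmap's own coordinate space,
    // so swap the offsets when the image is stored rotated 90 degrees.
    CGRect cropSquare;
    if (original.imageOrientation == UIImageOrientationLeft ||
        original.imageOrientation == UIImageOrientationRight) {
        cropSquare = CGRectMake(posY, posX, edge, edge);
    } else {
        cropSquare = CGRectMake(posX, posY, edge, edge);
    }

    CGImageRef imageRef = CGImageCreateWithImageInRect(original.CGImage, cropSquare);
    UIImage *ret = [UIImage imageWithCGImage:imageRef
                                       scale:original.scale
                                 orientation:original.imageOrientation];
    CGImageRelease(imageRef);
    return ret;
}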

Merge Two UIImage that are Rotated & Scaled

I'm making an app that works something like this: you load a photo and you put images over it, like balloons, etc.
When I try to merge one of these overlay images after only resizing it, it works fine (maybe 10px bigger than it should be, but no problem). The problem comes when you rotate the image [UIImageView]: it appears much bigger than the image actually is. I've tried a lot of things and nothing worked. I'll leave the code here; I hope someone can help.
Note: the image size is taken from inside the UIImageView, then multiplied by the scale of the main image.
- (UIImage *)mergeImage:(UIImageView *)mainImage withImageView:(UIImageView *)imageView {
    UIImage *temp = imageView.image;
    UIImage *tempMain = mainImage.image;
    CGFloat mainScale = [self imageViewScaleFactor:mainImage];
    CGFloat tempScale = 1 / mainScale;
    NSLog(@"%f", tempScale);

    // Rotate the UIImage
    UIGraphicsBeginImageContext(temp.size);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGAffineTransform transform = CGAffineTransformIdentity;
    transform = CGAffineTransformTranslate(transform, temp.size.width / 2, temp.size.height / 2);
    CGFloat angle = atan2(imageView.transform.b, imageView.transform.a);
    transform = CGAffineTransformRotate(transform, angle);
    transform = CGAffineTransformScale(transform, 1.0, -1.0);
    CGContextConcatCTM(ctx, transform);

    // Draw the image into the context
    CGContextDrawImage(ctx, CGRectMake(-temp.size.width / 2, -temp.size.height / 2, temp.size.width, temp.size.height), temp.CGImage);

    // Get an image from the context
    temp = [UIImage imageWithCGImage:CGBitmapContextCreateImage(ctx)];
    NSLog(@"%f %f %f", mainScale, mainImage.frame.size.width, mainImage.frame.size.height);

    UIGraphicsBeginImageContextWithOptions(tempMain.size, NO, 1.0f);

    // Get the imageView size & position
    NSLog(@"%f %f %f %f", imageView.frame.origin.x, imageView.frame.origin.y, imageView.frame.size.width, imageView.frame.size.height);
    CGFloat offsetX = 0;
    CGFloat offsetY = -44;
    if (tempMain.size.height > tempMain.size.width) {
        offsetX = ((tempMain.size.width * mainScale) - 320) / 2;
    } else {
        offsetY = ((tempMain.size.height * mainScale) - 416) / 2;
        offsetY -= 44;
    }
    CGFloat imageViewX = (imageView.frame.origin.x + offsetX) * tempScale;
    CGFloat imageViewY = (imageView.frame.origin.y + offsetY) * tempScale;
    CGFloat imageViewW = imageView.frame.size.width * tempScale;
    CGFloat imageViewH = imageView.frame.size.height * tempScale;
    CGRect tempRect = CGRectMake(imageViewX, imageViewY, imageViewW, imageViewH);
    [tempMain drawAtPoint:CGPointZero];
    [temp drawInRect:tempRect];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Thanks
This is the solution that worked for me:
Merging a previously rotated-by-gesture UIImageView with another one. WYS is not WYG
I just take a screenshot of the main screen and then crop it to the size of the photo. It's faster and cleaner, and the resolution is OK if the app runs on a Retina device; on a non-Retina device it isn't too good. And you need to prepare that code to work on both Retina and non-Retina.
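A minimal sketch of the screenshot-then-crop approach described above; the view names and the photo frame are my assumptions:

// Hypothetical implementation: render the screen, then crop to the photo's frame.
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, [UIScreen mainScreen].scale);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Crop to the photo's on-screen frame, converting points to pixels via the image scale.
CGRect photoFrame = self.mainImageView.frame;   // assumed to be the on-screen photo rect
CGRect cropRect = CGRectMake(photoFrame.origin.x * screenshot.scale,
                             photoFrame.origin.y * screenshot.scale,
                             photoFrame.size.width * screenshot.scale,
                             photoFrame.size.height * screenshot.scale);
CGImageRef croppedRef = CGImageCreateWithImageInRect(screenshot.CGImage, cropRect);
UIImage *merged = [UIImage imageWithCGImage:croppedRef
                                      scale:screenshot.scale
                                orientation:UIImageOrientationUp];
CGImageRelease(croppedRef);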
