I'm having a problem clipping a photo to a custom UIBezierPath. Instead of displaying the area within the path, as it should, it displays a different part of the photo (of a different size and position than it should be, but still the same shape as was drawn). Note that I'd also like to keep the full quality of the original photo.
I've included photos and captions below to explain my problem in more depth. If anyone can suggest another way to do this, I'll gladly start over.
Above is an illustration of the UIImage within a UIImageView, all within a UIView of which the CAShapeLayer that displays the UIBezierPath is a sublayer. For this example assume that the path in red was drawn by the user.
This diagram shows the CAShapeLayer and a graphics context created with the original image's size. How would I clip the context so that the result below is produced (please excuse its messiness)?
This is the result I'd like to be produced when all is said and done. Note I'd like it to still be the same size/quality as the original.
Here are some relevant portions of my code:
This clips an image to a path (a method in a UIImage category):
-(UIImage *)maskImageToPath:(UIBezierPath *)path {
// Precondition: the path exists and is closed
assert(path != nil);
// Mask the image, keeping original size
UIGraphicsBeginImageContextWithOptions(self.size, NO, 0);
[path addClip];
[self drawAtPoint:CGPointZero];
// Extract the image
UIImage *maskedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return maskedImage;
}
This adds points to the UIBezierPath
- (void)drawClippingLine:(UIPanGestureRecognizer *)sender {
CGPoint nextPoint = [sender locationInView:self];
// If UIBezierPath *clippingPath is nil, initialize it.
// Otherwise, add another point to it.
if(!clippingPath) {
clippingPath = [UIBezierPath bezierPath];
[clippingPath moveToPoint:nextPoint];
}
else {
[clippingPath addLineToPoint:nextPoint];
}
UIGraphicsBeginImageContext(_image.size);
[clippingPath stroke];
UIGraphicsEndImageContext();
}
You are getting an incorrect crop because the UIImage is scaled to fit inside the UIImageView. Basically, this means you have to translate the UIBezierPath coordinates into the corresponding coordinates within the UIImage. The easiest way to do this is to use a UIImageView category that converts points from the view's coordinate space (where the UIBezierPath was drawn) to the correct points within the image.
You can see an example of such a category here. More specifically you will need to use the convertPointFromView: method within that category to convert each point in your UIBezierPath.
(Sorry for not writing the complete code, I'm typing on my phone)
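As a rough illustration, here is a sketch of the kind of conversion that category performs, assuming the image view uses UIViewContentModeScaleAspectFit. The category and method names here are illustrative, not the ones from the linked code.
@interface UIImageView (PointConversionSketch)
- (CGPoint)imagePointForViewPoint:(CGPoint)viewPoint;
@end

@implementation UIImageView (PointConversionSketch)

- (CGPoint)imagePointForViewPoint:(CGPoint)viewPoint
{
    CGSize imageSize = self.image.size;
    CGSize viewSize  = self.bounds.size;

    // Aspect-fit: one uniform scale factor, image centered in the view.
    CGFloat scale = MIN(viewSize.width / imageSize.width,
                        viewSize.height / imageSize.height);
    CGFloat offsetX = (viewSize.width  - imageSize.width  * scale) / 2.0;
    CGFloat offsetY = (viewSize.height - imageSize.height * scale) / 2.0;

    // Undo the centering offset, then the scale, to land in image coordinates.
    return CGPointMake((viewPoint.x - offsetX) / scale,
                       (viewPoint.y - offsetY) / scale);
}

@end
Applying this to every point before adding it to the UIBezierPath (or building an equivalent CGAffineTransform and calling applyTransform: on the finished path) should make maskImageToPath: clip the region the user actually drew.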
Related
I want to make a custom drawing so that I can convert it to an image.
I have heard of UIBezierPath but don't know much about it; my purpose is to change its color based on the user's color selection.
Create a graphics context (CGContext) and get an image from it like this:
UIGraphicsBeginImageContextWithOptions(bounds.size, NO , [[UIScreen mainScreen] scale]);
// set the fill color (UIColor *)
[userSelectedColor setFill];
//create your path
UIBezierPath *path = ...
//fill the path with your color
[path fill];
UIImage *outputImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You might have to combine multiple paths to get your desired shape. First create the 'drop' with bezier paths. The path might look something like this:
//Create the top half of the circle
UIBezierPath *drop = [UIBezierPath bezierPathWithArcCenter:CGPointMake(CGRectGetWidth(bounds)*0.5f, CGRectGetWidth(bounds)*0.5f)
radius:CGRectGetWidth(bounds)*0.5f
startAngle:0
endAngle:DEGREES_TO_RADIANS(180)
clockwise:NO];
//Add the first half of the bottom part
[drop addCurveToPoint:CGPointMake(CGRectGetWidth(bounds)*0.5f,CGRectGetHeight(bounds))
controlPoint1:CGPointMake(CGRectGetWidth(bounds),CGRectGetWidth(bounds)*0.5f+CGRectGetHeight(bounds)*0.1f)
controlPoint2:CGPointMake(CGRectGetWidth(bounds)*0.6f,CGRectGetHeight(bounds)*0.8f)];
//Add the second half of the bottom part beginning from the sharp corner
[drop addCurveToPoint:CGPointMake(0,CGRectGetWidth(bounds)*0.5f)
controlPoint1:CGPointMake(CGRectGetWidth(bounds)*0.4f,CGRectGetHeight(bounds)*0.8f)
controlPoint2:CGPointMake(0,CGRectGetWidth(bounds)*0.5f+CGRectGetHeight(bounds)*0.1f)];
[drop closePath];
Not entirely sure this works since I couldn't test it right now. You might have to play with the control points a bit, and it's possible I made an error with the orientation.
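One thing to note: DEGREES_TO_RADIANS is not a built-in UIKit macro, so it has to be defined somewhere for the code above to compile. A common definition (an assumption here, not part of the original answer) is:
#define DEGREES_TO_RADIANS(degrees) ((degrees) * M_PI / 180.0)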
I am drawing an image on a custom UIView. When the view is resized, drawing performance drops and it starts lagging.
My image drawing code is below:
- (void)drawRect:(CGRect)rect
{
CGContextRef context = UIGraphicsGetCurrentContext();
UIBezierPath *bpath = [UIBezierPath bezierPathWithOvalInRect:CGRectMake(0, 0, width, height)];
CGContextAddPath(context, bpath.CGPath);
CGContextClip(context);
CGContextDrawImage(context, [self bounds], image.CGImage);
}
Is this approach correct?
You would be better off using Instruments to find where the bottleneck is than asking here.
However, what you will probably find is that every time the frame changes slightly the entire view will be redrawn.
If you're just using the drawRect to clip the view into an oval (I guess there's an image behind it or something) then you would be better off using a CAShapeLayer.
Create a CAShapeLayer, give it a CGPath, and set it as the mask of the view's layer to clip it.
Then you can change the path on the CAShapeLayer and it will update. You'll find (I think) that it performs much better too.
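A minimal sketch of that approach, assuming the view already displays the image (for example as a UIImageView subview or via the layer's contents), and that QuartzCore is imported:
// Somewhere in the view's setup code:
CAShapeLayer *maskLayer = [CAShapeLayer layer];
maskLayer.path = [UIBezierPath bezierPathWithOvalInRect:self.bounds].CGPath;
self.layer.mask = maskLayer;
Then, whenever the view resizes (for example in layoutSubviews), update only the path:
// No drawRect: needed; Core Animation re-applies the mask for you.
((CAShapeLayer *)self.layer.mask).path =
    [UIBezierPath bezierPathWithOvalInRect:self.bounds].CGPath;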
If your height and width are the same, you could just use a UIImageView instead of a custom view, and get the circular clipping by setting properties on the image view's layer. That approach draws nicely and quickly.
Just set up a UIImageView (called "image" in my example) and then have your view controller do this once:
image.layer.cornerRadius = image.bounds.size.width / 2.0;
image.layer.masksToBounds = YES;
I am developing a game for the App Store using SpriteKit. I have some single-colored shapes that I want to 'splice' together into one, and I want the user to be able to drag and drop this spliced image just as they would the single-colored shapes.
In more detail: I have the following two images:
http://s1381.photobucket.com/user/shahmeen/media/CircleYellow_zps11feede7.png.html
http://s1381.photobucket.com/user/shahmeen/media/CircleRed_zps49eb1802.png.html
SKSpriteNode *spriteA;
spriteA = [SKSpriteNode spriteNodeWithImageNamed:@"CircleYellow"];
[self addChild:spriteA];
SKSpriteNode *spriteB;
spriteB = [SKSpriteNode spriteNodeWithImageNamed:@"CircleRed"];
[self addChild:spriteB];
I would like to have a third sprite that looks as it does below (forgive my crude Photoshop skills; if the links aren't working, what I want to create is an image, spriteC, whose left half is the left half of spriteA and whose right half is the right half of spriteB):
http://s1381.photobucket.com/user/shahmeen/media/CircleRed_zps9ca710cb.png.html
(some code that crops spriteA and spriteB and then)
SKSpriteNode *spriteC;
spriteC = (the output of spriteA and spriteB cropped and spliced together);
[self addChild:spriteC];
I know I can do something like this using SKShapeNodes given the simplicity of the shapes above, but I intend to do this with much more complex figures. Also, I don't think it's practical to load in several .pngs, because with all the permutations I'd be getting into the several hundreds. I'll be happy to clarify anything - thanks
I would create the final combination of the two images in code and have an SKSpriteNode use this combined image as its texture. You can do it as follows, assuming the two images and the final one are all the same size:
- (UIImage*)combineImage:(UIImage*)leftImage withImage:(UIImage*)rightImage{
CGSize finalImageSize = leftImage.size;
CGFloat scale = leftImage.scale;
UIGraphicsBeginImageContextWithOptions(finalImageSize, NO, scale);
CGContextRef context = UIGraphicsGetCurrentContext();
[leftImage drawAtPoint:CGPointZero];
CGContextClipToRect(context, CGRectMake(finalImageSize.width/2, 0, finalImageSize.width/2, finalImageSize.height)); // use CGContextClipToMask with a mask image if you need more complicated shapes
[rightImage drawAtPoint:CGPointZero];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
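A hedged usage example, reusing the image names from the question (assuming both images exist in the app bundle):
UIImage *left  = [UIImage imageNamed:@"CircleYellow"];
UIImage *right = [UIImage imageNamed:@"CircleRed"];
UIImage *combined = [self combineImage:left withImage:right];

// Wrap the combined image in a texture and use it for spriteC.
SKSpriteNode *spriteC = [SKSpriteNode spriteNodeWithTexture:[SKTexture textureWithImage:combined]];
[self addChild:spriteC];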
What's the easiest and least performance-intensive way to achieve this?
Right now I have a UIBezierPath with over 60,000 lines. I want to create an image from it that will later be moved around on-screen and stretched.
Just create a graphics context and draw your path into it, like this:
UIGraphicsBeginImageContextWithOptions(path.bounds.size, NO, 0.0); // size of the image, opaque, and scale (0 uses the screen's scale)
// Shift the context so the path's bounding box starts at the origin;
// a path that doesn't begin near (0,0) would otherwise draw offset or get clipped.
CGContextTranslateCTM(UIGraphicsGetCurrentContext(), -path.bounds.origin.x, -path.bounds.origin.y);
[path fill]; // or [path stroke]
UIImage *myImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
If you're planning on moving the image around, you'll need to draw it into your view, or preferably a UIImageView that's a subview of that view. This is faster than redrawing your view every time the user moves his finger.
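For example (a sketch, assuming a view controller and the rendered myImage from above):
UIImageView *pathView = [[UIImageView alloc] initWithImage:myImage];
[self.view addSubview:pathView];

// Moving or stretching later is just a frame/transform change; the 60,000-line path is never redrawn.
pathView.center = CGPointMake(200.0, 300.0);
pathView.transform = CGAffineTransformMakeScale(1.5, 1.5);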
Hope this helps!
I'm creating an app that allows users to cut out part of an image. In order to do this, they'll create a bunch of UIBezierPaths to form the clipping path. My current setup is as follows:
A UIImageView displays the image they're cutting.
Above that UIImageView is a custom subclass of UIImageView that
performs the custom drawRect: methods for showing/updating the
UIBezierPaths that the user is adding.
When the user clicks the "Done" button, a new UIBezierPath object is created that incorporates all the individual paths created by the user by looping through the array they're stored in and calling appendPath: on itself. This new UIBezierPath then closes its path.
That's as far as I've gotten. I know UIBezierPath has an addClip method, but I can't figure out from the documentation how to use it.
In general, all examples I've seen for clipping directly use Core Graphics rather than the UIBezierPath wrapper. I realize that UIBezierPath has a CGPath property. So should I be using this at the time of clipping rather than the full UIBezierPath object?
Apple says not to subclass UIImageView, according to the UIImageView class reference. Thanks to @rob mayoff for pointing this out.
However, if you're implementing your own drawRect:, start with a plain UIView subclass. It's within drawRect: that you use addClip, and you can call it on the UIBezierPath directly without converting it to a CGPath.
- (void)drawRect:(CGRect)rect
{
// This assumes the clippingPath and image may be drawn in the current coordinate space.
[[self clippingPath] addClip];
[[self image] drawAtPoint:CGPointZero];
}
If you want to scale up or down to fill the bounds, you need to scale the graphics context. (You could also apply a CGAffineTransform to the clippingPath, but that is permanent, so you'd need to copy the clippingPath first.)
- (void)drawRect:(CGRect)rect
{
// This assumes the clippingPath and image are in the same coordinate space, and scales both to fill the view bounds.
if ([self image])
{
CGSize imageSize = [[self image] size];
CGRect bounds = [self bounds];
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextScaleCTM(context, bounds.size.width/imageSize.width, bounds.size.height/imageSize.height);
[[self clippingPath] addClip];
[[self image] drawAtPoint:CGPointZero];
}
}
This will scale the image separately on each axis. If you want to preserve its aspect ratio, you'll need to work out the overall scaling, and possibly translate it so it's centered or otherwise aligned.
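For reference, here is a sketch of the aspect-fit variant (one uniform scale, image centered), under the same assumption that clippingPath is in the image's coordinate space:
- (void)drawRect:(CGRect)rect
{
    if (![self image])
        return;

    CGSize imageSize = [[self image] size];
    CGRect bounds = [self bounds];

    // One uniform scale factor preserves the image's aspect ratio.
    CGFloat scale = MIN(bounds.size.width / imageSize.width,
                        bounds.size.height / imageSize.height);

    CGContextRef context = UIGraphicsGetCurrentContext();
    // Center the scaled image in the view, then apply the scale.
    CGContextTranslateCTM(context,
                          (bounds.size.width  - imageSize.width  * scale) / 2.0,
                          (bounds.size.height - imageSize.height * scale) / 2.0);
    CGContextScaleCTM(context, scale, scale);

    [[self clippingPath] addClip];
    [[self image] drawAtPoint:CGPointZero];
}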
Finally, all of this is relatively slow if your path gets drawn a lot. You will probably find it's faster to store the image in a CALayer and mask that with a CAShapeLayer containing the path. Treat the following methods as a rough sketch for testing rather than production code. You will need to separately scale the image layer and the mask to make them line up. The advantage is that you can change the mask without the underlying image being re-rendered.
- (void) setImage:(UIImage *)image;
{
// This method should also store the image for later retrieval.
// Putting an image directly into a CALayer will stretch the image to fill the layer.
[[self layer] setContents:(id) [image CGImage]];
}
- (void) setClippingPath:(UIBezierPath *)clippingPath;
{
// This method should also store the clippingPath for later retrieval.
if (![[self layer] mask])
[[self layer] setMask:[CAShapeLayer layer]];
[(CAShapeLayer*) [[self layer] mask] setPath:[clippingPath CGPath]];
}
If you do make image clipping with layer masks work, you no longer need a drawRect method. Remove it for efficiency.