I'm creating an app that allows users to cut out part of an image. In order to do this, they'll create a bunch of UIBezierPaths to form the clipping path. My current setup is as follows:
A UIImageView displays the image they're cutting.
Above that UIImageView is a custom subclass of UIImageView that
performs the custom drawRect: methods for showing/updating the
UIBezierPaths that the user is adding.
When the user clicks the "Done" button, a new UIBezierPath object is created that incorporates all the individual paths created by the user by looping through the array they're stored in and calling appendPath: on itself. This new UIBezierPath then closes its path.
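For reference, a minimal sketch of that combining step (userPaths is an assumed name for the array holding the individual paths):

UIBezierPath *combinedPath = [UIBezierPath bezierPath];
for (UIBezierPath *subpath in self.userPaths) {   // userPaths is an assumption
    [combinedPath appendPath:subpath];
}
[combinedPath closePath];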
That's as far as I've gotten. I know UIBezierPath has an addClip method, but I can't figure out from the documentation how to use it.
In general, all examples I've seen for clipping directly use Core Graphics rather than the UIBezierPath wrapper. I realize that UIBezierPath has a CGPath property. So should I be using this at the time of clipping rather than the full UIBezierPath object?
Apple says not to subclass UIImageView, according to the UIImageView class reference. Thank you to @rob mayoff for pointing this out.
However, if you're implementing your own drawRect:, start with a plain UIView subclass. It's within drawRect: that you use addClip, and you can do that with a UIBezierPath directly, without converting it to a CGPath.
- (void)drawRect:(CGRect)rect
{
    // This assumes the clippingPath and image may be drawn in the current coordinate space.
    [[self clippingPath] addClip];
    [[self image] drawAtPoint:CGPointZero];
}
If you want to scale up or down to fill the bounds, you need to scale the graphics context. (You could also apply a CGAffineTransform to the clippingPath, but that is permanent, so you'd need to copy the clippingPath first.)
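A rough, untested sketch of that copy-then-transform alternative (it assumes the same clippingPath and image properties used below):

// Inside drawRect:
CGSize imageSize = [[self image] size];
CGRect bounds = [self bounds];

// Copy first, because applyTransform: permanently modifies the path.
UIBezierPath *scaledPath = [[self clippingPath] copy];
[scaledPath applyTransform:CGAffineTransformMakeScale(bounds.size.width / imageSize.width,
                                                      bounds.size.height / imageSize.height)];
[scaledPath addClip];

// Draw the image scaled to fill the bounds, matching the scaled path.
[[self image] drawInRect:bounds];

The drawRect: below scales the graphics context instead.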
- (void)drawRect:(CGRect)rect
{
    // This assumes the clippingPath and image are in the same coordinate space, and scales both to fill the view bounds.
    if ([self image])
    {
        CGSize imageSize = [[self image] size];
        CGRect bounds = [self bounds];
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextScaleCTM(context, bounds.size.width/imageSize.width, bounds.size.height/imageSize.height);
        [[self clippingPath] addClip];
        [[self image] drawAtPoint:CGPointZero];
    }
}
This will scale the image separately on each axis. If you want to preserve its aspect ratio, you'll need to work out the overall scaling, and possibly translate it so it's centered or otherwise aligned.
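An aspect-preserving variant might look like this rough, untested sketch (it keeps the same clippingPath and image assumptions as above):

- (void)drawRect:(CGRect)rect
{
    if (![self image])
        return;

    CGSize imageSize = [[self image] size];
    CGRect bounds = [self bounds];

    // One scale factor for both axes preserves the aspect ratio (aspect fit).
    CGFloat scale = MIN(bounds.size.width / imageSize.width,
                        bounds.size.height / imageSize.height);

    // Center the scaled image in the view.
    CGFloat xOffset = (bounds.size.width - imageSize.width * scale) / 2.0;
    CGFloat yOffset = (bounds.size.height - imageSize.height * scale) / 2.0;

    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, xOffset, yOffset);
    CGContextScaleCTM(context, scale, scale);

    [[self clippingPath] addClip];
    [[self image] drawAtPoint:CGPointZero];
}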
Finally, all of this is relatively slow if your path gets drawn a lot. You will probably find it's faster to store the image in a CALayer, and mask that with a CAShapeLayer containing the path. The following methods are for testing and demonstration only; don't use them as-is in production. You will need to separately scale the image layer and the mask to make them line up. The advantage is that you can change the mask without the underlying image being re-rendered.
- (void)setImage:(UIImage *)image
{
    // This method should also store the image for later retrieval.
    // Putting an image directly into a CALayer will stretch the image to fill the layer.
    [[self layer] setContents:(id)[image CGImage]];
}
- (void)setClippingPath:(UIBezierPath *)clippingPath
{
    // This method should also store the clippingPath for later retrieval.
    if (![[self layer] mask])
        [[self layer] setMask:[CAShapeLayer layer]];
    [(CAShapeLayer *)[[self layer] mask] setPath:[clippingPath CGPath]];
}
If you do make image clipping with layer masks work, you no longer need a drawRect method. Remove it for efficiency.
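To make the image layer and the mask line up at other sizes (as noted above), a rough, untested sketch might rescale the mask whenever the view's size changes, for example in layoutSubviews (it assumes the stored image and clippingPath share the image's coordinate space):

- (void)layoutSubviews
{
    [super layoutSubviews];

    CGSize imageSize = [[self image] size];
    CGRect bounds = [self bounds];
    if (imageSize.width == 0 || imageSize.height == 0 || ![[self layer] mask])
        return;

    CAShapeLayer *maskLayer = (CAShapeLayer *)[[self layer] mask];
    maskLayer.frame = bounds;

    // layer.contents stretches the image to fill the layer, so scale the
    // path from image coordinates to view coordinates to match.
    UIBezierPath *scaledPath = [[self clippingPath] copy];
    [scaledPath applyTransform:CGAffineTransformMakeScale(bounds.size.width / imageSize.width,
                                                          bounds.size.height / imageSize.height)];
    maskLayer.path = scaledPath.CGPath;
}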
Related
I am drawing an image on a custom UIView. When the view is resized, drawing performance goes down and it starts lagging.
My image drawing code is below:
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    UIBezierPath *bpath = [UIBezierPath bezierPathWithOvalInRect:CGRectMake(0, 0, width, height)];
    CGContextAddPath(context, bpath.CGPath);
    CGContextClip(context);
    CGContextDrawImage(context, [self bounds], image.CGImage);
}
Is this approach correct?
You would be better off using Instruments to find where the bottleneck is than asking here.
However, what you will probably find is that every time the frame changes slightly the entire view will be redrawn.
If you're just using the drawRect to clip the view into an oval (I guess there's an image behind it or something) then you would be better off using a CAShapeLayer.
Create a CAShapeLayer, give it a CGPath, and set it as the mask of view.layer.
Then you can change the path on the CAShapeLayer and it will update. You'll find (I think) that it performs much better too.
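A minimal, untested sketch of that setup (view and newPath are assumed names):

// Remember to #import <QuartzCore/QuartzCore.h>.
CAShapeLayer *maskLayer = [CAShapeLayer layer];
maskLayer.frame = view.bounds;
maskLayer.path = [UIBezierPath bezierPathWithOvalInRect:view.bounds].CGPath;
view.layer.mask = maskLayer;

// Later, changing the path re-clips without redrawing the whole view:
maskLayer.path = newPath.CGPath;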
If your height and width are the same, you could just use a UIImageView instead of needing a custom view, and get the circular clipping by setting properties on the image view's layer. That approach draws nice and quickly.
Just set up a UIImageView (called "image" in my example) and then have your view controller do this once:
image.layer.cornerRadius = image.bounds.size.width / 2.0;
image.layer.masksToBounds = YES;
I am new to these parts of the iOS API, and here are some questions that are causing an infinite loop in my mind:
Why does ..BeginImageContext have a size but ..GetCurrentContext does not have a size? If ..GetCurrentContext does not have a size, where does it draw? What are the bounds?
Why did they have to have two contexts, one for images and one for general graphics? Isn't an image context already a graphics context? What was the reason for the separation? (I am trying to know what I don't know.)
UIGraphicsGetCurrentContext() returns a reference to the current graphics context; it doesn't create one. That's why it doesn't need a size parameter: the current context already has whatever size it was created with.
UIGraphicsBeginImageContext(aSize) is for creating graphics contexts at the UIKit level outside of UIView's drawRect: method.
Here is where you would use them.
If you had a subclass of UIView you could override its drawRect: method like so:
- (void)drawRect:(CGRect)rect
{
    // The graphics context was created for you by UIView.
    // You can now perform your custom drawing below.

    // This gets you the current graphics context.
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    // Set the fill color to blue.
    CGContextSetFillColorWithColor(ctx, [UIColor blueColor].CGColor);

    // Fill your custom view with a blue rect.
    CGContextFillRect(ctx, rect);
}
In this case, you didn't need to create the graphics context. It was created for you automatically and allows you to perform your custom drawing in the drawRect: method.
Now, in another situation, you might want to perform some custom drawing outside of the drawRect: method. Here you would use UIGraphicsBeginImageContext(aSize)
You could do something like this:
UIBezierPath *circle = [UIBezierPath bezierPathWithOvalInRect:CGRectMake(0, 0, 200, 200)];
UIGraphicsBeginImageContext(CGSizeMake(200, 200));
//this gets the graphic context
CGContextRef context = UIGraphicsGetCurrentContext();
//you can stroke and/or fill
CGContextSetStrokeColorWithColor(context, [UIColor blueColor].CGColor);
CGContextSetFillColorWithColor(context, [UIColor lightGrayColor].CGColor);
[circle fill];
[circle stroke];
//now get the image from the context
UIImage *bezierImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageView *bezierImageView = [[UIImageView alloc]initWithImage:bezierImage];
I hope this helps to clear things up for you. Also, you should be using UIGraphicsBeginImageContextWithOptions(size, opaque, scale). For further explanation of custom drawing with graphics contexts, see my answer here
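For the 200×200 example above, that call would look like this; passing 0 for the scale means "use the device's main screen scale":

UIGraphicsBeginImageContextWithOptions(CGSizeMake(200, 200), NO, 0.0);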
You are slightly confused here.
As the name suggests, UIGraphicsGetCurrentContext grabs the CURRENT context; it doesn't need a size because it simply returns an existing context to you.
So when is there an existing context? Always? No. When the screen is rendering a frame, a context is created. This context is available in the drawRect: function, which is called to draw the view.
Normally, your own methods aren't called from drawRect:, so they don't actually have a context available. This is when you call UIGraphicsBeginImageContext.
When you do that, you create an image context; you can then grab that context with UIGraphicsGetCurrentContext and work with it. Just remember to end it with UIGraphicsEndImageContext.
To clear things up further: if you modify the context in drawRect:, your changes show up on screen. In your own method, the changes don't show up anywhere until you extract the image from the context with UIGraphicsGetImageFromCurrentImageContext().
Hope this helps!
I'm having a problem when it comes to clipping a photo to a custom UIBezierPath. Instead of displaying the area within the path, as it should, it displays another part of the photo instead (of different size and position than where it should be, but still the same shape as was drawn). Note I'd also like to keep the full quality of the original photo.
I've included photos and captions below to explain my problem in more depth. If anyone could give me another way to do such a thing I'll gladly start over.
Above is an illustration of the UIImage within a UIImageView, all within a UIView of which the CAShapeLayer that displays the UIBezierPath is a sublayer. For this example assume that the path in red was drawn by the user.
In this diagram is the CAShapeLayer and a graphics context created with the original image's size. How would I clip the context so that the result below is produced (please excuse the messiness of it)?
This is the result I'd like to be produced when all is said and done. Note I'd like it to still be the same size/quality as the original.
Here are some relevant portions of my code:
This clips an image to a path
- (UIImage *)maskImageToPath:(UIBezierPath *)path {
    // Precondition: the path exists and is closed
    assert(path != nil);

    // Mask the image, keeping original size
    UIGraphicsBeginImageContextWithOptions(self.size, NO, 0);
    [path addClip];
    [self drawAtPoint:CGPointZero];

    // Extract the image
    UIImage *maskedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return maskedImage;
}
This adds points to the UIBezierPath
- (void)drawClippingLine:(UIPanGestureRecognizer *)sender {
    CGPoint nextPoint = [sender locationInView:self];

    // If UIBezierPath *clippingPath is nil, initialize it.
    // Otherwise, add another point to it.
    if (!clippingPath) {
        clippingPath = [UIBezierPath bezierPath];
        [clippingPath moveToPoint:nextPoint];
    }
    else {
        [clippingPath addLineToPoint:nextPoint];
    }

    UIGraphicsBeginImageContext(_image.size);
    [clippingPath stroke];
    UIGraphicsEndImageContext();
}
You are getting an incorrect crop because the UIImage is scaled to fit inside the UIImageView. Basically this means you have to translate the UIBezierPath coordinates to the correct coordinates within the UIImage. The easiest way to do this is to use a UIImageView category which will convert the points from one view (in this case the UIBezierPath, even though it's not really a view) to the correct points within the UIImageView.
You can see an example of such a category here. More specifically you will need to use the convertPointFromView: method within that category to convert each point in your UIBezierPath.
(Sorry for not writing the complete code, I'm typing on my phone)
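As a rough, untested sketch of the idea, assuming the image view uses UIViewContentModeScaleAspectFit (imageView, image and clippingPath are assumed names; the linked category's convertPointFromView: does the equivalent per point, and maskImageToPath: is the category method from the question):

CGSize imageSize = image.size;
CGSize viewSize = imageView.bounds.size;

// How much the image was scaled down to fit, and where it sits inside the view.
CGFloat scale = MIN(viewSize.width / imageSize.width, viewSize.height / imageSize.height);
CGFloat xOffset = (viewSize.width - imageSize.width * scale) / 2.0;
CGFloat yOffset = (viewSize.height - imageSize.height * scale) / 2.0;

// Map the path from view coordinates back into image coordinates.
UIBezierPath *imageSpacePath = [clippingPath copy];
CGAffineTransform toImageSpace = CGAffineTransformMakeTranslation(-xOffset, -yOffset);
toImageSpace = CGAffineTransformConcat(toImageSpace, CGAffineTransformMakeScale(1.0 / scale, 1.0 / scale));
[imageSpacePath applyTransform:toImageSpace];

UIImage *maskedImage = [image maskImageToPath:imageSpacePath];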
I have a UIView where I would like to draw a circle that extends past the frame of the UIView.
I have set masksToBounds to NO, expecting that I can draw outside the bounds of the UIView by 5 pixels on the right and bottom.
I expect the oval not to get clipped, but it does get clipped and does not draw outside the bounds. Why is that?
- (void)drawRect:(CGRect)rect
{
    int width = self.bounds.size.width;
    int height = self.bounds.size.height;

    self.layer.masksToBounds = NO;

    //// Oval Drawing
    UIBezierPath *ovalPath = [UIBezierPath bezierPathWithOvalInRect:CGRectMake(0, 0, width + 5, height + 5)];
    [[UIColor magentaColor] setFill];
    [ovalPath fill];

    [[UIColor blackColor] setStroke];
    ovalPath.lineWidth = 1;
    [ovalPath stroke];
}
From http://developer.apple.com/library/ios/#documentation/general/conceptual/Devpedia-CocoaApp/DrawingModel.html
UIView and NSView automatically configure the drawing environment of a view before its drawRect: method is invoked. (In the AppKit framework, configuring the drawing environment is called locking focus.) As part of this configuration, the view class creates a graphics context for the current drawing environment.
This graphics context is a Quartz object (CGContext) that contains information the drawing system requires, such as the colors to apply, the drawing mode (stroke or fill), line width and style information, font information, and compositing options. (In the AppKit, an object of the NSGraphicsContext class wraps a CGContext object.) A graphics context object is associated with a window, bitmap, PDF file, or other output device and maintains information about the current state of the drawing environment for that entity. A view draws using a graphics context associated with the view’s window. For a view, the graphics context sets the default clipping region to coincide with the view’s bounds and puts the default drawing origin at the origin of a view’s boundaries.
Once the clipping region is set, you can only make it smaller. So, what you're trying to do isn't possible in a UIView drawRect:.
I'm not certain this will fix your problem, but it's something to look into. You're setting self.layer.masksToBounds = NO every single time you enter drawRect:. You should try setting it just once in the init method instead: A) it's unnecessary to do it multiple times, and B) there may be a problem with setting it after drawRect: has already been called; who knows.
I have a UIView with a transparent background, and some buttons. I would like to capture the drawing of the view, shrink it, and redraw (mirror) it elsewhere on the screen. (On top of another view.) The buttons can change, so it isn't static.
What would be the best way to do this?
Check out the nice sample code at http://saveme-dot-txt.blogspot.com/2011/06/dynamic-view-reflection-using.html, which follows the WWDC sessions. It uses CAReplicatorLayer for the reflection; it's pretty easy to implement and looks really smooth and impressive.
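A rough, untested sketch of the CAReplicatorLayer idea (contentLayer and hostView are assumed names, and the exact transform values depend on your layout):

// Remember to #import <QuartzCore/QuartzCore.h>.
CAReplicatorLayer *replicator = [CAReplicatorLayer layer];
replicator.frame = contentLayer.frame;   // put the replicator where the content was
contentLayer.frame = replicator.bounds;  // the content now fills the replicator
replicator.instanceCount = 2;            // the original plus one reflected copy

// Flip the second copy vertically and place it directly below the original.
// (Treat this transform as a starting point; tweak it for your layout.)
CATransform3D flip = CATransform3DMakeTranslation(0, replicator.bounds.size.height, 0);
flip = CATransform3DScale(flip, 1.0, -1.0, 1.0);
replicator.instanceTransform = flip;

[replicator addSublayer:contentLayer];
[hostView.layer addSublayer:replicator];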
The general idea will be to get a UIView's layer to draw itself into a context and then grab a UIImage out of it.
UIGraphicsBeginImageContext(view.frame.size);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You will also need to #import <QuartzCore/QuartzCore.h>
If you don't really need to capture the drawing (from what you describe, it seems unlikely that you need an image), create another instance of the view and apply a transform. Something like...
UIView *original = [[UIView alloc] initWithFrame:originalFrame];
UIView* copy = [[UIView alloc] initWithFrame:copyFrame];
// Scale down 50% and rotate 45 degrees
//
CGAffineTransform t = CGAffineTransformMakeScale(0.5, 0.5);
copy.transform = CGAffineTransformRotate(t, M_PI_4);
[someSuperView addSubview:original];
[someSuperView addSubview:copy];
// release, etc.
I added the rotation just to show that you can do a variety of different things with transformations.