How do I crop the portion of an image that is enclosed inside an arbitrary polygon (4-sided, but not a rectangle)? I just want to know which method to follow, not the code.
You can do this easily in Core Graphics.
You just need to create a new image context, add the path to the context, then crop the context to the path. You can then draw your image in this and get out a cropped image.
- (UIImage *)cropImage:(UIImage *)image withPath:(UIBezierPath *)path {
    // The UIBezierPath is defined in the UIKit coordinate system, where (0,0) is the top left.
    CGRect r = CGPathGetBoundingBox(path.CGPath); // the rect to draw our image in (minimum rect that the path occupies)
    UIGraphicsBeginImageContextWithOptions(r.size, NO, image.scale); // begin image context, with transparency & the scale of the image
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(ctx, -r.origin.x, -r.origin.y); // translate context so that when we add the path, it starts at (0,0)
    CGContextAddPath(ctx, path.CGPath); // add path
    CGContextClip(ctx); // clip any future drawing to the path region
    [image drawInRect:(CGRect){CGPointZero, image.size}]; // draw image
    UIImage *i = UIGraphicsGetImageFromCurrentImageContext(); // get image from context
    UIGraphicsEndImageContext(); // clean up and end the context
    return i; // return image
}
For example, if we take a screenshot of your question (I couldn't find any other images lying about!)
and use the following code....
UIImage *i = [UIImage imageNamed:@"3.png"];
UIBezierPath* p = [UIBezierPath bezierPath];
[p moveToPoint:CGPointMake(0, 0)];
[p addLineToPoint:CGPointMake(1500, 500)];
[p addLineToPoint:CGPointMake(500, 1200)];
UIImage* i1 = [self cropImage:i withPath:p];
This would be the output...
You could even add this to a UIImage category if you're going to be cropping images regularly.
Updated for Swift 3.
I noticed that a lot of the implementations out there seem to want the background to be white or transparent, and I really just wanted the background color to be black.
extension UIImage {
    func crop(withPath: UIBezierPath, andColor: UIColor) -> UIImage {
        let r: CGRect = withPath.cgPath.boundingBox
        UIGraphicsBeginImageContextWithOptions(r.size, false, self.scale)
        defer { UIGraphicsEndImageContext() } // make sure the context is always cleaned up
        if let context = UIGraphicsGetCurrentContext() {
            // Fill the background with the requested color before clipping.
            let rect = CGRect(origin: .zero, size: size)
            context.setFillColor(andColor.cgColor)
            context.fill(rect)
            context.translateBy(x: -r.origin.x, y: -r.origin.y)
            context.addPath(withPath.cgPath)
            context.clip()
        }
        draw(in: CGRect(origin: .zero, size: size))
        guard let image = UIGraphicsGetImageFromCurrentImageContext() else {
            return UIImage()
        }
        return image
    }
}
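A hedged usage sketch, assuming an image named "photo" in the bundle and an arbitrary triangular path (both placeholders, not from the original answer):

// Hypothetical usage: "photo" and the triangle's corner points are placeholders.
let path = UIBezierPath()
path.move(to: CGPoint(x: 0, y: 0))
path.addLine(to: CGPoint(x: 300, y: 100))
path.addLine(to: CGPoint(x: 100, y: 250))
path.close()

if let source = UIImage(named: "photo") {
    let cropped = source.crop(withPath: path, andColor: .black)
    // use `cropped`, e.g. assign it to an image view
}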
Related
I want to draw a circle on a UIImageView. I have tried it, but it didn't work.
This is an example image of what I want to achieve:
The circle should be drawn where the user taps on the UIImageView, and I want to do it without adding any sublayer.
Is there some way to do this?
So far I have used this code from the internet, but it didn't work.
- (UIImage *)imageByDrawingCircleOnImage:(UIImage *)image
                                  pointX:(float)x
                                  pointY:(float)y
{
    // begin a graphics context of sufficient size
    UIGraphicsBeginImageContext(image.size);
    // draw original image into the context
    [image drawAtPoint:CGPointZero];
    // get the context for CoreGraphics
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // set stroking color and draw circle
    [[UIColor redColor] setStroke];
    // inset the full-image rect by the tap coordinates to get the circle's rect
    CGRect circleRect = CGRectMake(0, 0,
                                   image.size.width,
                                   image.size.height);
    circleRect = CGRectInset(circleRect, x, y);
    // draw circle
    CGContextStrokeEllipseInRect(ctx, circleRect);
    // make image out of bitmap context
    UIImage *retImage = UIGraphicsGetImageFromCurrentImageContext();
    // free the context
    UIGraphicsEndImageContext();
    return retImage;
}
Suppose your image view is square. Set the cornerRadius to half of its width or height.
masksToBounds then clips the image to the rounded shape of the image view.
For example, add this code for your requirement:
yourImageView.layer.cornerRadius = yourImageView.frame.size.height / 2;
yourImageView.layer.masksToBounds = YES;
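If you're working in Swift, the equivalent would be along these lines (a sketch, assuming yourImageView is your UIImageView):

// Swift sketch: round the image view and clip the image to its bounds.
yourImageView.layer.cornerRadius = yourImageView.frame.size.height / 2
yourImageView.layer.masksToBounds = true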
Hope this will help you :)
Is there a way in iOS to add a border to an image which is not a simple rectangle?
I have successfully tinted an image using the following code:
- (UIImage *)imageWithTintColor:(UIColor *)tintColor
{
    UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Flip the context so the CGImage mask is applied right-side up.
    CGContextTranslateCTM(context, 0, self.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextSetBlendMode(context, kCGBlendModeNormal);
    CGRect rect = CGRectMake(0, 0, self.size.width, self.size.height);
    // Clip to the visible part of the image, then flood-fill with the tint color.
    CGContextClipToMask(context, rect, self.CGImage);
    [tintColor setFill];
    CGContextFillRect(context, rect);
    UIImage *tintedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return tintedImage;
}
Let's say, for example, I wanted to add a blue border to this image (note: this is NOT the NSString 'A', but an example UIImage object).
When I alter the code above to use [color setStroke] and CGContextStrokeRect(context, rect), the image just disappears.
I've already learned from SO that this is possible using CoreImage + EdgeDetection, but isn't there a "simple" CoreGraphics - way similar to tinting an image ?
Thank you!
-- EDIT --
Please note that I want to add the border to the image itself. I don't want to create the border effect through a UIImageView!
The border should match the shape of the image before applying the border.
In this case: blue outline for the outside + inside of the 'A'.
This is not a very satisfying method, I would say, but it works to some extent.
You can make use of adding a shadow to the layer. For this you need to strip off the white portion of the image, leaving the character surrounded by alpha.
I used the code below.
UIImage *image = [UIImage imageNamed:@"image_name_here.png"];
CALayer *imageLayer = [CALayer layer];
imageLayer.frame = CGRectMake(0, 0, 200.0f, 200.0f);
imageLayer.contents = (id)image.CGImage;
imageLayer.position = self.view.layer.position;
imageLayer.shadowColor = [UIColor whiteColor].CGColor;
imageLayer.shadowOffset = CGSizeMake(0.0f, 0.0f);
imageLayer.shadowOpacity = 1.0f;
imageLayer.shadowRadius = 4.0f;
[self.view.layer addSublayer:imageLayer];
And the result would be something like this.
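For reference, a rough Swift translation of the same layer-shadow idea might look like this (the image name and the 200x200 frame are placeholders carried over from the snippet above):

// Sketch of the same shadow-based outline in Swift; the image must already
// have transparency around the shape for the shadow to trace its edge.
let image = UIImage(named: "image_name_here")
let imageLayer = CALayer()
imageLayer.frame = CGRect(x: 0, y: 0, width: 200, height: 200)
imageLayer.contents = image?.cgImage
imageLayer.position = view.layer.position
imageLayer.shadowColor = UIColor.white.cgColor
imageLayer.shadowOffset = .zero
imageLayer.shadowOpacity = 1.0
imageLayer.shadowRadius = 4.0
view.layer.addSublayer(imageLayer)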
I realize this was asked 2 years ago and is probably not still relevant to you, however I'd like to submit my solution in case anyone else stumbles upon this question while looking for an answer (like I just did).
One way to generate a border around an image is by tinting the image to your border color (say black), and then overlaying a smaller copy of your image onto the middle of the tinted one.
I built my solution upon your imageWithTintColor:tintColor function as an extension to UIImage in Swift 3:
extension UIImage {
    func imageByApplyingBorder(ofSize borderSize: CGFloat, andColor borderColor: UIColor) -> UIImage {
        /*
         Get the scale of the smaller image.
         If borderSize is 10%, then the smaller image should be 90% of its original size.
         */
        let scale: CGFloat = 1.0 - borderSize

        // Generate tinted background image of original size
        let backgroundImage = imageWithTintColor(borderColor)

        // Generate smaller image of scale
        let smallerImage = imageByResizing(by: scale)

        UIGraphicsBeginImageContext(backgroundImage.size)

        // Draw background image first, followed by smaller image in the middle
        backgroundImage.draw(at: CGPoint(x: 0, y: 0))
        smallerImage.draw(at: CGPoint(
            x: (backgroundImage.size.width - smallerImage.size.width) / 2,
            y: (backgroundImage.size.height - smallerImage.size.height) / 2
        ))

        let borderedImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return borderedImage
    }

    func imageWithTintColor(_ color: UIColor) -> UIImage {
        UIGraphicsBeginImageContext(size)
        let context = UIGraphicsGetCurrentContext()!

        // Turn upside-down (later transformations turn the image back)
        context.translateBy(x: 0, y: size.height)
        context.scaleBy(x: 1.0, y: -1.0)
        context.setBlendMode(.normal)

        // Mask to visible part of image (turns image right-side-up)
        context.clip(to: CGRect(x: 0, y: 0, width: size.width, height: size.height), mask: self.cgImage!)

        // Fill with input color
        color.setFill()
        context.fill(CGRect(x: 0, y: 0, width: size.width, height: size.height))

        let tintedImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return tintedImage
    }

    func imageByResizing(by scale: CGFloat) -> UIImage {
        // Determine new width and height
        let width = scale * size.width
        let height = scale * size.height

        // Draw a scaled-down image
        UIGraphicsBeginImageContextWithOptions(CGSize(width: width, height: height), false, 0.0)
        draw(in: CGRect(x: 0, y: 0, width: width, height: height))

        let newImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return newImage
    }
}
Please note that the borderSize parameter of imageByApplyingBorder(ofSize:andColor:) is given as a fraction of the original image size. If your image is 100x100 px and borderSize = 0.1, you will get a 100x100 px image with a 10 px internal border (5 px on each side).
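A hedged usage sketch (the asset name "photo" is a placeholder; the 0.1 and black values are just the example figures above):

// Hypothetical usage: 10% black internal border around a bundled image.
if let original = UIImage(named: "photo") {
    let bordered = original.imageByApplyingBorder(ofSize: 0.1, andColor: .black)
    // `bordered` is the same size as `original`, with the border drawn inward.
}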
Here is an example image generated using the above function on a 1000x1000px circular center clip of one of the stock iOS Simulator photos:
Any suggestions for optimizations or other approaches are welcome.
You can use the code below to add a border to the UIImageView:
[self.testImage.layer setBorderColor:[UIColor blueColor].CGColor];
[self.testImage.layer setBorderWidth:5.0];
Try this
#import <QuartzCore/QuartzCore.h>
[yourUIImageView.layer setBorderColor:[UIColor blueColor].CGColor];
[yourUIImageView.layer setBorderWidth:6.0];
If someone looks for an outside transparent border for UIImageView or any other View, look at my solution here or here.
I want to render a UIView to an image. My go-to UIView category for this is
- (UIImage *)num_renderToImage
{
    UIGraphicsBeginImageContext(self.bounds.size);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
However in this case, the UIView has elements that draw outside its bounds and the above clips them. Altering the size passed to UIGraphicsBeginImageContext doesn't help since the size grows down and to the right, but these elements are above and to the left. What's the right way to do this?
In the scenario above, with a UIView clipping a UIButton that draws outside its bounds, you might try:
- (IBAction)snapshot:(id)sender {
    UIButton *button = sender;
    UIView *v = button.superview;

    // Prepare the rectangle to be drawn
    CGRect allTheViews = CGRectUnion(v.bounds, button.frame);
    UIGraphicsBeginImageContext(allTheViews.size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // This is what you do differently
    CGContextTranslateCTM(context, -allTheViews.origin.x, -allTheViews.origin.y);

    // This part is the same as before
    [v.layer renderInContext:context];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    [UIImagePNGRepresentation(img) writeToFile:@"/tmp/foo.png" atomically:NO];
}
Here we're taking the union of what we want to draw, then translating the CTM so it's in view in the graphics context we're drawing into.
(This example, in particular, is hooked up to the action of the button and writes the UIImage of the button and containing view out to a file. You can adjust as your needs require.)
I made this more general version in Swift 5.0 in case anyone else has the same problem. Just set the offset to the value you want:
private func snapshotView(cell: UIView) -> UIImageView {
    // How many extra points outside the view's bounds to include in the snapshot
    let offset: CGFloat = 10

    let frame = cell.bounds.union(CGRect(x: -offset * 2.0, y: -offset * 2.0,
                                         width: cell.bounds.width,
                                         height: cell.bounds.height)).size

    UIGraphicsBeginImageContextWithOptions(frame, false, 0)
    let context = UIGraphicsGetCurrentContext()
    context!.translateBy(x: offset, y: offset)
    cell.layer.render(in: context!)

    let snapshotImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return UIImageView(image: snapshotImage)
}
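A quick usage sketch (assuming `cell` is any UIView you want to snapshot and `view` is its container):

// Hypothetical usage: snapshot the cell with its extra margin and show it.
let snapshot = snapshotView(cell: cell)
view.addSubview(snapshot)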
Can I draw shapes like circles, rectangles, lines, etc. outside the drawRect method using
CGContextRef contextRef = UIGraphicsGetCurrentContext();
or is it mandatory to use it inside drawRect only?
Please help me, and let me know how I can draw shapes outside the drawRect method.
Actually, I want to keep plotting dots on the touchesMoved event.
This is my code for drawing a dot.
CGContextRef contextRef = UIGraphicsGetCurrentContext();
CGContextSetRGBFillColor(contextRef, 0, 255, 0, 1);
CGContextFillEllipseInRect(contextRef, CGRectMake(theMovedPoint.x, theMovedPoint.y, 8, 8));
Basically you need a context to draw something. You can think of a context as a sheet of paper. UIGraphicsGetCurrentContext will return NULL if you are not in a valid context. In drawRect you get the context of the view.
Having said that, you can draw outside the drawRect method. You can begin an image context, draw into it, and add the result to your view.
Look at the below example taken from here,
- (UIImage *)imageByDrawingCircleOnImage:(UIImage *)image
{
    // begin a graphics context of sufficient size
    UIGraphicsBeginImageContext(image.size);
    // draw original image into the context
    [image drawAtPoint:CGPointZero];
    // get the context for CoreGraphics
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // set stroking color and draw circle
    [[UIColor redColor] setStroke];
    // make circle rect 5 px from border
    CGRect circleRect = CGRectMake(0, 0,
                                   image.size.width,
                                   image.size.height);
    circleRect = CGRectInset(circleRect, 5, 5);
    // draw circle
    CGContextStrokeEllipseInRect(ctx, circleRect);
    // make image out of bitmap context
    UIImage *retImage = UIGraphicsGetImageFromCurrentImageContext();
    // free the context
    UIGraphicsEndImageContext();
    return retImage;
}
For Swift 4
func imageByDrawingCircle(on image: UIImage) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(CGSize(width: image.size.width, height: image.size.height), false, 0.0)
    // draw original image into the context
    image.draw(at: CGPoint.zero)
    // get the context for CoreGraphics
    let ctx = UIGraphicsGetCurrentContext()!
    // set stroking color and draw circle
    ctx.setStrokeColor(UIColor.red.cgColor)
    // make circle rect 5 px from border
    var circleRect = CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height)
    circleRect = circleRect.insetBy(dx: 5, dy: 5)
    // draw circle
    ctx.strokeEllipse(in: circleRect)
    // make image out of bitmap context
    let retImage = UIGraphicsGetImageFromCurrentImageContext()!
    // free the context
    UIGraphicsEndImageContext()
    return retImage
}
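Tying this back to the touchesMoved scenario in the question, here is a minimal Swift sketch of plotting dots into an image context and pushing the result into an image view. The view controller, the canvas image view, the dot size, and the color are all my own assumptions, not part of the original answer:

import UIKit

class DotCanvasViewController: UIViewController {
    // Hypothetical image view that accumulates the drawing.
    let canvasImageView = UIImageView()

    override func viewDidLoad() {
        super.viewDidLoad()
        canvasImageView.frame = view.bounds
        view.addSubview(canvasImageView)
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: canvasImageView) else { return }

        // Begin an image context the size of the canvas, redraw whatever was
        // drawn so far, then stamp a small green dot at the touch point.
        UIGraphicsBeginImageContextWithOptions(canvasImageView.bounds.size, false, 0)
        canvasImageView.image?.draw(in: canvasImageView.bounds)
        if let ctx = UIGraphicsGetCurrentContext() {
            ctx.setFillColor(UIColor.green.cgColor)
            ctx.fillEllipse(in: CGRect(x: point.x - 4, y: point.y - 4, width: 8, height: 8))
        }
        canvasImageView.image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
    }
}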
I have a UIImage where part of it has been selected by the user to be cleared out (made transparent). To make the selection I used NSBezierPath.
How can I clear out / make transparent part of a UIImage in iOS?
First, I assume you have a UIBezierPath (iOS), not an NSBezierPath (Mac OS X).
To do this, you will need to use Core Graphics: create an image context, draw the UIImage into that context, and then clear the region specified by the UIBezierPath.
// Create an image context containing the original UIImage.
UIGraphicsBeginImageContext(originalImage.size);
[originalImage drawAtPoint:CGPointZero];
// Clip to the bezier path and clear that portion of the image.
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextAddPath(context, bezierPath.CGPath);
CGContextClip(context);
CGContextClearRect(context, CGRectMake(0, 0, originalImage.size.width, originalImage.size.height));
// Build a new UIImage from the image context.
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
A Swift 4 version of landweber's answer above:
UIGraphicsBeginImageContext(image!.size)
image!.draw(at: CGPoint.zero)
let context: CGContext = UIGraphicsGetCurrentContext()!
let bez = UIBezierPath(rect: CGRect(x: 0, y: 0, width: 10, height: 10))
context.addPath(bez.cgPath)
context.clip()
context.clear(CGRect(x: 0, y: 0, width: image!.size.width, height: image!.size.height))
let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
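If you need this in more than one place, the same snippet could be wrapped in a small UIImage extension; a sketch, with a method name of my own choosing:

import UIKit

extension UIImage {
    // Returns a copy of the image with the region inside `path` cleared to
    // transparent. Sketch only; assumes the path is in image coordinates.
    func clearing(_ path: UIBezierPath) -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(size, false, scale)
        defer { UIGraphicsEndImageContext() }
        draw(at: .zero)
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        context.addPath(path.cgPath)
        context.clip()
        context.clear(CGRect(origin: .zero, size: size))
        return UIGraphicsGetImageFromCurrentImageContext()
    }
}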