iOS UIImage masked border - ios

Is there a way in iOS to add a border to an image that is not a simple rectangle?
I have successfully tinted an image using the following code:
- (UIImage *)imageWithTintColor:(UIColor *)tintColor
{
    UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(context, 0, self.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextSetBlendMode(context, kCGBlendModeNormal);
    CGRect rect = CGRectMake(0, 0, self.size.width, self.size.height);
    CGContextClipToMask(context, rect, self.CGImage);
    [tintColor setFill];
    CGContextFillRect(context, rect);
    UIImage *tintedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return tintedImage;
}
Let's say, for example, I wanted to add a blue border to this image (note: this is not an 'A' NSString, but a UIImage used as an example).
When I alter the code above to use [tintColor setStroke] and CGContextStrokeRect(context, rect), the image just disappears.
I've already learned from SO that this is possible using Core Image + edge detection, but isn't there a "simple" Core Graphics way, similar to tinting an image?
Thank you!
-- EDIT --
Please note that I want to add the border to the image itself. I don't want to create the border effect through a UIImageView!
The border should match the shape of the image before applying the border.
In this case: blue outline for the outside + inside of the 'A'.

This is not a very satisfying method, I would say, but it works to some extent.
You can make use of the layer's shadow. For this you need to strip off the white portion of the image, leaving the character surrounded by transparency.
I used the code below.
UIImage *image = [UIImage imageNamed:@"image_name_here.png"];
CALayer *imageLayer = [CALayer layer];
imageLayer.frame = CGRectMake(0, 0, 200.0f, 200.0f);
imageLayer.contents = (id)image.CGImage;
imageLayer.position = self.view.layer.position;
imageLayer.shadowColor = [UIColor whiteColor].CGColor;
imageLayer.shadowOffset = CGSizeMake(0.0f, 0.0f);
imageLayer.shadowOpacity = 1.0f;
imageLayer.shadowRadius = 4.0f;
[self.view.layer addSublayer:imageLayer];
And the result would be something like this.

I realize this was asked 2 years ago and is probably not still relevant to you, however I'd like to submit my solution in case anyone else stumbles upon this question while looking for an answer (like I just did).
One way to generate a border around an image is by tinting the image to your border color (say black), and then overlaying a smaller copy of your image onto the middle of the tinted one.
I built my solution upon your imageWithTintColor: method, as an extension to UIImage in Swift 3:
extension UIImage {
    func imageByApplyingBorder(ofSize borderSize: CGFloat, andColor borderColor: UIColor) -> UIImage {
        /*
         Get the scale of the smaller image.
         If borderSize is 10%, the smaller image should be 90% of its original size.
         */
        let scale: CGFloat = 1.0 - borderSize

        // Generate tinted background image of original size
        let backgroundImage = imageWithTintColor(borderColor)

        // Generate smaller image of scale
        let smallerImage = imageByResizing(by: scale)

        UIGraphicsBeginImageContext(backgroundImage.size)

        // Draw background image first, followed by smaller image in the middle
        backgroundImage.draw(at: CGPoint(x: 0, y: 0))
        smallerImage.draw(at: CGPoint(
            x: (backgroundImage.size.width - smallerImage.size.width) / 2,
            y: (backgroundImage.size.height - smallerImage.size.height) / 2
        ))

        let borderedImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return borderedImage
    }

    func imageWithTintColor(_ color: UIColor) -> UIImage {
        UIGraphicsBeginImageContext(size)
        let context = UIGraphicsGetCurrentContext()!

        // Turn upside-down (later transformations turn the image back)
        context.translateBy(x: 0, y: size.height)
        context.scaleBy(x: 1.0, y: -1.0)
        context.setBlendMode(.normal)

        // Mask to the visible part of the image (turns the image right-side-up)
        context.clip(to: CGRect(x: 0, y: 0, width: size.width, height: size.height), mask: self.cgImage!)

        // Fill with the input color
        color.setFill()
        context.fill(CGRect(x: 0, y: 0, width: size.width, height: size.height))

        let tintedImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return tintedImage
    }

    func imageByResizing(by scale: CGFloat) -> UIImage {
        // Determine the new width and height
        let width = scale * size.width
        let height = scale * size.height

        // Draw a scaled-down image
        UIGraphicsBeginImageContextWithOptions(CGSize(width: width, height: height), false, 0.0)
        draw(in: CGRect(x: 0, y: 0, width: width, height: height))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return newImage
    }
}
Please note that the borderSize parameter of imageByApplyingBorder(ofSize:andColor:) is given as a fraction of the original image size. If your image is 100x100 px and borderSize = 0.1, the inner image is scaled to 90x90 px and centered, leaving a 5 px tinted border on each side.
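For illustration, a hypothetical call site could look like the sketch below. The asset name "glyph" and the 0.1 / blue values are placeholders, not from the answer:

import UIKit

// Hypothetical usage of the extension above.
if let original = UIImage(named: "glyph") {
    // 10% of the image size becomes a blue border around the visible shape.
    let bordered = original.imageByApplyingBorder(ofSize: 0.1, andColor: .blue)
    let imageView = UIImageView(image: bordered)
    imageView.contentMode = .scaleAspectFit
}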
Here is an example image generated using the above function on a 1000x1000px circular center clip of one of the stock iOS Simulator photos:
Any suggestions for optimizations or other approaches are welcome.

You can use the code below to add a border to the UIImageView:
[self.testImage.layer setBorderColor:[UIColor blueColor].CGColor];
[self.testImage.layer setBorderWidth:5.0];

Try this
#import <QuartzCore/QuartzCore.h>
[yourUIImageView.layer setBorderColor:[UIColor blueColor].CGColor];
[yourUIImageView.layer setBorderWidth:6.0];

If someone looks for an outside transparent border for UIImageView or any other View, look at my solution here or here.

Related

Screenshot of a view partially shown on screen

For taking a screenshot of a view, I am using this code:
- (UIImage *)renderImageFromView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [view.layer renderInContext:context];
    UIImage *renderedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return renderedImage;
}
Now suppose my view is bigger than the screen. Let's say its rect relative to the screen is {-100, -100, screenWidth + 100, screenHeight + 100} and I want to take a screenshot of this view.
I am currently using this code:
- (UIImage *)renderImageFromView:(UIView *)view withRect:(CGRect)frame
{
    CGRect rect = {-100, -100, screenWidth + 100, screenHeight + 100};
    UIGraphicsBeginImageContextWithOptions(rect.size, YES, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [view.layer renderInContext:context];
    UIImage *renderedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return renderedImage;
}
But the issue:
The final image contains a screenshot of the view from {0, 0, screenWidth + 100, screenHeight + 100}, but I was expecting {-100, -100, screenWidth + 100, screenHeight + 100}.
Any Solution?
You first have to store the view's frame in a temporary variable and then move the view so its origin is (0, 0); after taking the screenshot you assign the stored frame back.
Also add the offset to the width and height: if the view's x is -100, add 100 to the width and set x = 0; do the same for y.
Now create the context with UIGraphicsBeginImageContextWithOptions(rect.size, YES, 0); and call renderInContext: as you are already doing. Don't forget to set the original frame back on the view afterwards.
Hope it is helpful.
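A minimal Swift sketch of those steps (my interpretation, with assumed names; not the answerer's actual code):

import UIKit

// Sketch only: stash the frame, move the view to (0, 0), render its full bounds,
// then restore the original frame afterwards.
func renderFullView(_ view: UIView) -> UIImage? {
    let originalFrame = view.frame      // remember where the view was
    view.frame.origin = .zero           // temporarily remove the negative offset

    UIGraphicsBeginImageContextWithOptions(view.bounds.size, true, 0)
    defer { UIGraphicsEndImageContext() }

    guard let context = UIGraphicsGetCurrentContext() else {
        view.frame = originalFrame
        return nil
    }
    view.layer.render(in: context)
    let image = UIGraphicsGetImageFromCurrentImageContext()

    view.frame = originalFrame          // put the view back where it was
    return image
}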
The problem in your case is that the context only takes a size, not a frame, so setting the rect's origin to (-100, -100) has no effect. I believe the solution is to create a context that is 200 points bigger in both dimensions and then tell the view to render itself at point (100, 100) of that context. To set the relative origin of where to draw, I applied a transform to the view's layer.
Sorry for using Swift, but I believe you can easily rewrite it to ObjC:
func renderImageFromView(view: UIView) -> UIImage? {
    let size = CGSize(width: view.bounds.size.width + 200, height: view.bounds.size.height + 200)
    UIGraphicsBeginImageContextWithOptions(size, true, UIScreen.main.scale)
    let context = UIGraphicsGetCurrentContext()!

    view.layer.transform = CATransform3DMakeAffineTransform(CGAffineTransform.identity.translatedBy(x: 100, y: 100))
    view.layer.render(in: context)
    view.layer.transform = CATransform3DMakeAffineTransform(CGAffineTransform.identity)

    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}

Show Blended Layers reveals red UIImage but it does not have an alpha channel

I have an image that does not have an alpha channel; I confirmed this in Finder's Get Info panel. Yet when I put it in a UIImageView (which is inside a UIScrollView) and enable Show Blended Layers, the image is red, which indicates transparency is being applied and will be a hit on performance.
How can I fix this so it shows green, i.e. so iOS knows everything in this view is fully opaque?
I tried the following but this did not remove the red color:
self.imageView.opaque = YES;
self.scrollView.opaque = YES;
By default, UIImage instances are rendered in a graphics context that includes an alpha channel. To avoid this, you need to generate another image using a new graphics context where opaque = YES.
- (UIImage *)optimizedImageFromImage:(UIImage *)image
{
    CGSize imageSize = image.size;
    // Pass YES for opaque so the resulting bitmap has no alpha channel.
    UIGraphicsBeginImageContextWithOptions(imageSize, YES, image.scale);
    [image drawInRect:CGRectMake(0, 0, imageSize.width, imageSize.height)];
    UIImage *optimizedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return optimizedImage;
}
Swift 3.x / Xcode 9.x:
func optimizedImage(from image: UIImage) -> UIImage {
    let imageSize: CGSize = image.size
    UIGraphicsBeginImageContextWithOptions(imageSize, true, UIScreen.main.scale)
    image.draw(in: CGRect(x: 0, y: 0, width: imageSize.width, height: imageSize.height))
    let optimizedImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return optimizedImage ?? UIImage()
}

Can I draw shapes like circle, rectangle, line etc. outside the drawRect method

Can I draw shapes like circle, rectangle, line etc. outside the drawRect method using
CGContextRef contextRef = UIGraphicsGetCurrentContext();
or is it mandatory to use it inside drawRect only?
Please help me understand how I can draw shapes outside the drawRect method.
Actually, I want to keep plotting dots on the touchesMoved event.
This is my code for drawing a dot:
CGContextRef contextRef = UIGraphicsGetCurrentContext();
CGContextSetRGBFillColor(contextRef, 0.0, 1.0, 0.0, 1.0); // RGBA components are in the 0-1 range
CGContextFillEllipseInRect(contextRef, CGRectMake(theMovedPoint.x, theMovedPoint.y, 8, 8));
Basically you need a context to draw something; you can think of a context as a sheet of paper. UIGraphicsGetCurrentContext will return NULL if you are not in a valid context. In drawRect you get the context of the view.
Having said that, you can draw outside the drawRect method: you can begin an image context, draw into it, and add the result to your view.
Look at the example below, taken from here:
- (UIImage *)imageByDrawingCircleOnImage:(UIImage *)image
{
    // begin a graphics context of sufficient size
    UIGraphicsBeginImageContext(image.size);

    // draw original image into the context
    [image drawAtPoint:CGPointZero];

    // get the context for CoreGraphics
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    // set stroking color and draw circle
    [[UIColor redColor] setStroke];

    // make circle rect 5 px from border
    CGRect circleRect = CGRectMake(0, 0, image.size.width, image.size.height);
    circleRect = CGRectInset(circleRect, 5, 5);

    // draw circle
    CGContextStrokeEllipseInRect(ctx, circleRect);

    // make image out of bitmap context
    UIImage *retImage = UIGraphicsGetImageFromCurrentImageContext();

    // free the context
    UIGraphicsEndImageContext();

    return retImage;
}
For Swift 4
func imageByDrawingCircle(on image: UIImage) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(CGSize(width: image.size.width, height: image.size.height), false, 0.0)

    // draw original image into the context
    image.draw(at: CGPoint.zero)

    // get the context for CoreGraphics
    let ctx = UIGraphicsGetCurrentContext()!

    // set stroking color and draw circle
    ctx.setStrokeColor(UIColor.red.cgColor)

    // make circle rect 5 px from border
    var circleRect = CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height)
    circleRect = circleRect.insetBy(dx: 5, dy: 5)

    // draw circle
    ctx.strokeEllipse(in: circleRect)

    // make image out of bitmap context
    let retImage = UIGraphicsGetImageFromCurrentImageContext()!

    // free the context
    UIGraphicsEndImageContext()

    return retImage
}
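Applied to the original goal of plotting dots on touchesMoved, a rough sketch of the same image-context idea could look like this. The class and the canvasImageView outlet are assumed names, not from the answer:

import UIKit

// Rough sketch: accumulate green dots into an image on every touch move.
// `canvasImageView` is an assumed UIImageView property.
class DotDrawingViewController: UIViewController {
    @IBOutlet weak var canvasImageView: UIImageView!

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: canvasImageView) else { return }

        UIGraphicsBeginImageContextWithOptions(canvasImageView.bounds.size, false, 0)
        defer { UIGraphicsEndImageContext() }

        // Redraw everything drawn so far, then add the new dot on top.
        canvasImageView.image?.draw(in: canvasImageView.bounds)
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        ctx.setFillColor(UIColor.green.cgColor)
        ctx.fillEllipse(in: CGRect(x: point.x - 4, y: point.y - 4, width: 8, height: 8))

        canvasImageView.image = UIGraphicsGetImageFromCurrentImageContext()
    }
}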

How to crop to letterbox in iOS

In iOS, how can I crop a rectangular image to a square letterbox so that it maintains the original aspect ratio and the remaining space is filled with black? E.g. the "pad" strategy that transloadit uses to crop/resize their images:
http://transloadit.com/docs/image-resize
For anyone who stumbles onto this question (and many more like it) without a clear answer: I have written a neat little category that accomplishes this at the model level by modifying the UIImage directly, rather than just modifying the view. Simply use this method and the returned image will be letterboxed to a square shape, regardless of which side is longer.
- (UIImage *)letterboxedImageIfNecessary
{
    CGFloat width = self.size.width;
    CGFloat height = self.size.height;

    // no letterboxing needed, already a square
    if (width == height)
    {
        return self;
    }

    // find the larger side
    CGFloat squareSize = MAX(width, height);
    UIGraphicsBeginImageContext(CGSizeMake(squareSize, squareSize));

    // draw black background
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetRGBFillColor(context, 0.0, 0.0, 0.0, 1.0);
    CGContextFillRect(context, CGRectMake(0, 0, squareSize, squareSize));

    // draw image in the middle
    [self drawInRect:CGRectMake((squareSize - width) / 2, (squareSize - height) / 2, width, height)];

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Just for convenience, here's a Swift rewrite of @Dima's answer:
import UIKit

extension UIImage
{
    func letterboxImage() -> UIImage
    {
        let width = self.size.width
        let height = self.size.height

        // no letterboxing needed, already a square
        if (width == height)
        {
            return self
        }

        // find the larger side
        let squareSize = max(width, height)
        UIGraphicsBeginImageContext(CGSizeMake(squareSize, squareSize))

        // draw black background
        let context = UIGraphicsGetCurrentContext()!
        CGContextSetRGBFillColor(context, 0.0, 0.0, 0.0, 1.0)
        CGContextFillRect(context, CGRectMake(0, 0, squareSize, squareSize))

        // draw image in the middle
        self.drawInRect(CGRectMake((squareSize - width) / 2, (squareSize - height) / 2, width, height))

        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage
    }
}
You have to set the contentMode of the UIImageView to UIViewContentModeScaleAspectFit; you can also find this option for the UIImageView in the storyboard.
Then set the backgroundColor of the UIImageView to black (or another color of your choice), as in the sketch below.
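A minimal Swift sketch of that view-level setup (the frame and the "photo" asset name are placeholders, not from the answer):

import UIKit

// View-level letterboxing: aspect-fit scaling plus a black background
// gives the padded look without modifying the image itself.
let imageView = UIImageView(frame: CGRect(x: 0, y: 0, width: 300, height: 300))
imageView.contentMode = .scaleAspectFit    // keep the original aspect ratio
imageView.backgroundColor = .black         // empty space shows as black bars
imageView.image = UIImage(named: "photo")  // placeholder asset name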

iOS make part of an UIImage transparent

I have a UIImage where part of it has been selected by the user to be cleared out (made transparent). To make the selection I used NSBezierPath.
How can I clear (make transparent) part of a UIImage in iOS?
First, I assume you have a UIBezierPath (iOS), not an NSBezierPath (macOS).
To do this, you will need to use Core Graphics: create an image context, draw the UIImage into that context, and then clear the region specified by the UIBezierPath.
// Create an image context containing the original UIImage.
UIGraphicsBeginImageContext(originalImage.size);
[originalImage drawAtPoint:CGPointZero];

// Clip to the bezier path and clear that portion of the image.
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextAddPath(context, bezierPath.CGPath);
CGContextClip(context);
CGContextClearRect(context, CGRectMake(0, 0, originalImage.size.width, originalImage.size.height));

// Build a new UIImage from the image context.
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Swift 4 solution based on landweber's answer above:
UIGraphicsBeginImageContext(image!.size)
image!.draw(at: CGPoint.zero)
let context: CGContext = UIGraphicsGetCurrentContext()!
let bez = UIBezierPath(rect: CGRect(x: 0, y: 0, width: 10, height: 10))
context.addPath(bez.cgPath)
context.clip()
context.clear(CGRect(x: 0, y: 0, width: image!.size.width, height: image!.size.height))
let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
