How to crop to letterbox in iOS

In iOS, how can I crop a rectangular image to a square letterbox so that it maintains the original aspect ratio and the remaining space is filled with black? E.g. the "pad" strategy that Transloadit uses to crop/resize their images.
http://transloadit.com/docs/image-resize

For anyone who stumbles onto this question and many more like it without a clear answer, I have written a neat little category that accomplishes this at the model level, modifying the UIImage directly rather than just the view. Simply call this method; the returned image will be letterboxed to a square shape, regardless of which side is longer.
- (UIImage *)letterboxedImageIfNecessary
{
    CGFloat width = self.size.width;
    CGFloat height = self.size.height;

    // no letterboxing needed, already a square
    if (width == height)
    {
        return self;
    }

    // find the larger side
    CGFloat squareSize = MAX(width, height);

    UIGraphicsBeginImageContext(CGSizeMake(squareSize, squareSize));

    // draw black background
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetRGBFillColor(context, 0.0, 0.0, 0.0, 1.0);
    CGContextFillRect(context, CGRectMake(0, 0, squareSize, squareSize));

    // draw image in the middle
    [self drawInRect:CGRectMake((squareSize - width) / 2, (squareSize - height) / 2, width, height)];

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

Just for convenience, here's a Swift rewrite of @Dima's answer:
import UIKit

extension UIImage
{
    func letterboxImage() -> UIImage
    {
        let width = size.width
        let height = size.height

        // no letterboxing needed, already a square
        if width == height
        {
            return self
        }

        // find the larger side
        let squareSize = max(width, height)

        UIGraphicsBeginImageContext(CGSize(width: squareSize, height: squareSize))

        // draw black background
        let context = UIGraphicsGetCurrentContext()!
        context.setFillColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 1.0)
        context.fill(CGRect(x: 0, y: 0, width: squareSize, height: squareSize))

        // draw image in the middle
        draw(in: CGRect(x: (squareSize - width) / 2, y: (squareSize - height) / 2, width: width, height: height))

        let newImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return newImage
    }
}

You have to set the contentMode of the UIImageView to UIViewContentModeScaleAspectFit. You can also find this option for the UIImageView in the storyboard.
Then set the backgroundColor of the UIImageView to black (or any other color of your choice).
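For completeness, a minimal sketch of that view-level setup (the image name and frame are illustrative):

```swift
import UIKit

// View-level letterboxing: the image keeps its aspect ratio and the
// image view's background color shows through as the letterbox bars.
let imageView = UIImageView(frame: CGRect(x: 0, y: 0, width: 300, height: 300))
imageView.contentMode = .scaleAspectFit   // UIViewContentModeScaleAspectFit
imageView.backgroundColor = .black        // color of the letterbox bars
imageView.image = UIImage(named: "photo") // hypothetical asset name
```

Unlike the category above, this only changes how the image is displayed; the UIImage itself is not modified.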

Related

ScreenShot of view partially shown on screen

For taking a screenshot of a view, I am using this code:
- (UIImage *)renderImageFromView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [view.layer renderInContext:context];
    UIImage *renderedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return renderedImage;
}
Now suppose my view is bigger than the screen size. Let's say its rect relative to the screen is {-100, -100, screenWidth + 100, screenHeight + 100}, and I want to take a screenshot of this view.
I am currently using this code:
- (UIImage *)renderImageFromView:(UIView *)view withRect:(CGRect)frame
{
    CGRect rect = {-100, -100, screenWidth + 100, screenHeight + 100};
    UIGraphicsBeginImageContextWithOptions(rect.size, YES, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [view.layer renderInContext:context];
    UIImage *renderedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return renderedImage;
}
But here is the issue:
The final image contains a screenshot of the view from {0, 0, screenWidth + 100, screenHeight + 100}, but I was expecting {-100, -100, screenWidth + 100, screenHeight + 100}.
Any solution?
You first have to move the view on screen by setting its origin to (0, 0); before that, store the original frame in a temporary variable so you can assign it back after taking the screenshot.
Now fold the offset into the size: if your view's x is -100, add 100 to the width and set x = 0; do the same for y.
Then create the context with UIGraphicsBeginImageContextWithOptions(rect.size, YES, 0) and call renderInContext: as you are already doing correctly. And don't forget to set the original frame back on the view.
Hope it is helpful.
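As a sketch of that save/offset/restore sequence (the helper name and the way the offset is folded into the size are illustrative, mirroring the rect from the question):

```swift
import UIKit

// Hypothetical helper: temporarily move the view so its origin is (0, 0),
// fold the old negative offset into the capture size, snapshot, then restore.
func snapshotRepositionedView(_ view: UIView) -> UIImage? {
    let originalFrame = view.frame                // keep the frame in a temp variable
    var captureFrame = originalFrame
    captureFrame.size.width += abs(min(originalFrame.origin.x, 0))   // x = -100 → width + 100
    captureFrame.size.height += abs(min(originalFrame.origin.y, 0))  // same for y
    captureFrame.origin = .zero                   // set origin to (0, 0)
    view.frame = captureFrame

    UIGraphicsBeginImageContextWithOptions(captureFrame.size, true, 0)
    defer {
        UIGraphicsEndImageContext()
        view.frame = originalFrame                // don't forget to restore the frame
    }
    guard let context = UIGraphicsGetCurrentContext() else { return nil }
    view.layer.render(in: context)
    return UIGraphicsGetImageFromCurrentImageContext()
}
```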
The problem in your case is that the context takes just size, not the frame, so setting origin of the rect to -100, -100 has no effect. I believe the solution is to create a context of size that is +200 points bigger in both directions, and then tell the view to render itself at point (100, 100) of that context. To set the relative origin of where to draw I tried to use transform on the view layer.
Sorry for using Swift, but I believe you can easily rewrite it in Objective-C:
func renderImageFromView(view: UIView) -> UIImage? {
    let size = CGSize(width: view.bounds.size.width + 200, height: view.bounds.size.height + 200)
    UIGraphicsBeginImageContextWithOptions(size, true, UIScreen.main.scale)
    let context = UIGraphicsGetCurrentContext()!

    view.layer.transform = CATransform3DMakeAffineTransform(CGAffineTransform.identity.translatedBy(x: 100, y: 100))
    view.layer.render(in: context)
    view.layer.transform = CATransform3DMakeAffineTransform(CGAffineTransform.identity)

    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}

Crop rotated, panned and zoomed image [duplicate]

I am developing an application in which I process an image via its pixels, but that image processing takes a lot of time. Therefore I want to crop the UIImage (only the middle part of the image, i.e. removing/cropping the bordered part of the image). Here is the code I have so far:
- (NSInteger)processImage1:(UIImage *)image
{
    CGFloat width = image.size.width;
    CGFloat height = image.size.height;

    struct pixel *pixels = (struct pixel *)calloc(1, width * height * sizeof(struct pixel));
    if (pixels != nil)
    {
        // Create a new bitmap
        CGContextRef context = CGBitmapContextCreate(
            (void *)pixels,
            width,
            height,
            8,
            width * 4,
            CGImageGetColorSpace(image.CGImage),
            kCGImageAlphaPremultipliedLast
        );
        if (context != NULL)
        {
            // Draw the image in the bitmap
            CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), image.CGImage);
            NSUInteger numberOfPixels = width * height;
            NSMutableArray *numberOfPixelsArray = [[[NSMutableArray alloc] initWithCapacity:numberOfPixels] autorelease];
            // ...
How can I take (crop out) the middle part of the UIImage?
Try something like this:
CGImageRef imageRef = CGImageCreateWithImageInRect([largeImage CGImage], cropRect);
image = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
Note: cropRect is a smaller rectangle covering the middle part of the image...
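As a usage sketch in Swift (the asset name and the centered 50% region are illustrative): note that CGImageCreateWithImageInRect / CGImage.cropping(to:) works in pixel coordinates, so the rect should account for the image's scale.

```swift
import UIKit

// Build a centered crop rect in pixel coordinates for CGImage-based cropping.
let image = UIImage(named: "photo")!            // hypothetical asset
let scale = image.scale
let pixelSize = CGSize(width: image.size.width * scale,
                       height: image.size.height * scale)
// Take the middle 50% of the image
let cropRect = CGRect(x: pixelSize.width * 0.25,
                      y: pixelSize.height * 0.25,
                      width: pixelSize.width * 0.5,
                      height: pixelSize.height * 0.5)
if let cgCropped = image.cgImage?.cropping(to: cropRect) {
    // Preserve the original scale and orientation in the resulting UIImage
    let cropped = UIImage(cgImage: cgCropped,
                          scale: scale,
                          orientation: image.imageOrientation)
    // use `cropped` ...
}
```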
I was looking for a way to get an arbitrary rectangular crop (ie., sub-image) of a UIImage.
Most of the solutions I tried do not work if the orientation of the image is anything but UIImageOrientationUp.
For example:
http://www.hive05.com/2008/11/crop-an-image-using-the-iphone-sdk/
Typically if you use your iPhone camera, you will have other orientations like UIImageOrientationLeft, and you will not get a correct crop with the above. This is because CGImageRef/CGContextDrawImage use a coordinate system that differs from UIImage's.
The code below uses UI* methods (no CGImageRef), and I have tested this with up/down/left/right oriented images, and it seems to work great.
// get sub image
- (UIImage *)getSubImageFrom:(UIImage *)img WithRect:(CGRect)rect {
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // translated rectangle for drawing sub image
    CGRect drawRect = CGRectMake(-rect.origin.x, -rect.origin.y, img.size.width, img.size.height);

    // clip to the bounds of the image context
    // not strictly necessary as it will get clipped anyway?
    CGContextClipToRect(context, CGRectMake(0, 0, rect.size.width, rect.size.height));

    // draw image
    [img drawInRect:drawRect];

    // grab image
    UIImage *subImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return subImage;
}
Because I needed it just now, here is M-V's code in Swift 4:
func imageWithImage(image: UIImage, croppedTo rect: CGRect) -> UIImage {
    UIGraphicsBeginImageContext(rect.size)
    let context = UIGraphicsGetCurrentContext()

    let drawRect = CGRect(x: -rect.origin.x, y: -rect.origin.y,
                          width: image.size.width, height: image.size.height)

    context?.clip(to: CGRect(x: 0, y: 0,
                             width: rect.size.width, height: rect.size.height))

    image.draw(in: drawRect)

    let subImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return subImage!
}
It would ultimately be faster, with a lot less image creation from sprite atlases, if you could set not only the image for a UIImageView, but also the top-left offset to display within that UIImage. Maybe this is possible. It would certainly eliminate a lot of effort!
Meanwhile, I created these useful functions in a utility class that I use in my apps. It creates a UIImage from part of another UIImage, with options to rotate, scale, and flip using standard UIImageOrientation values to specify. The pixel scaling is preserved from the original image.
My app creates a lot of UIImages during initialization, and this necessarily takes time. But some images aren't needed until a certain tab is selected. To give the appearance of quicker load I could create them in a separate thread spawned at startup, then just wait till it's done when that tab is selected.
This code is also posted at Most efficient way to draw part of an image in iOS
+ (UIImage *)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)aperture {
    return [ChordCalcController imageByCropping:imageToCrop toRect:aperture withOrientation:UIImageOrientationUp];
}

// Draw a full image into a crop-sized area and offset to produce a cropped, rotated image
+ (UIImage *)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)aperture withOrientation:(UIImageOrientation)orientation {
    // convert y coordinate to origin bottom-left
    CGFloat orgY = aperture.origin.y + aperture.size.height - imageToCrop.size.height,
            orgX = -aperture.origin.x,
            scaleX = 1.0,
            scaleY = 1.0,
            rot = 0.0;
    CGSize size;

    switch (orientation) {
        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
            size = CGSizeMake(aperture.size.height, aperture.size.width);
            break;
        case UIImageOrientationDown:
        case UIImageOrientationDownMirrored:
        case UIImageOrientationUp:
        case UIImageOrientationUpMirrored:
            size = aperture.size;
            break;
        default:
            assert(NO);
            return nil;
    }

    switch (orientation) {
        case UIImageOrientationRight:
            rot = 1.0 * M_PI / 2.0;
            orgY -= aperture.size.height;
            break;
        case UIImageOrientationRightMirrored:
            rot = 1.0 * M_PI / 2.0;
            scaleY = -1.0;
            break;
        case UIImageOrientationDown:
            scaleX = scaleY = -1.0;
            orgX -= aperture.size.width;
            orgY -= aperture.size.height;
            break;
        case UIImageOrientationDownMirrored:
            orgY -= aperture.size.height;
            scaleY = -1.0;
            break;
        case UIImageOrientationLeft:
            rot = 3.0 * M_PI / 2.0;
            orgX -= aperture.size.height;
            break;
        case UIImageOrientationLeftMirrored:
            rot = 3.0 * M_PI / 2.0;
            orgY -= aperture.size.height;
            orgX -= aperture.size.width;
            scaleY = -1.0;
            break;
        case UIImageOrientationUp:
            break;
        case UIImageOrientationUpMirrored:
            orgX -= aperture.size.width;
            scaleX = -1.0;
            break;
    }

    // set the draw rect to pan the image to the right spot
    CGRect drawRect = CGRectMake(orgX, orgY, imageToCrop.size.width, imageToCrop.size.height);

    // create a context for the new image
    UIGraphicsBeginImageContextWithOptions(size, NO, imageToCrop.scale);
    CGContextRef gc = UIGraphicsGetCurrentContext();

    // apply rotation and scaling
    CGContextRotateCTM(gc, rot);
    CGContextScaleCTM(gc, scaleX, scaleY);

    // draw the image to our clipped context using the offset rect
    CGContextDrawImage(gc, drawRect, imageToCrop.CGImage);

    // pull the image from our cropped context
    UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();

    // pop the context to get back to the default
    UIGraphicsEndImageContext();

    // Note: this is autoreleased
    return cropped;
}
A very small/simple Swift 5 version.
You shouldn't mix UI and CG objects; they sometimes have very different coordinate spaces. This can make you sad.
Note 👉 self.draw(at:)
private prefix func - (right: CGPoint) -> CGPoint
{
    return CGPoint(x: -right.x, y: -right.y)
}

extension UIImage
{
    public func cropped(to cropRect: CGRect) -> UIImage?
    {
        let renderer = UIGraphicsImageRenderer(size: cropRect.size)
        return renderer.image { _ in
            self.draw(at: -cropRect.origin)
        }
    }
}
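Usage of the extension above is then a one-liner (the asset name and rect are illustrative):

```swift
import UIKit

let photo = UIImage(named: "photo")!  // hypothetical asset
// Returns a 200x200-point image taken from (100, 50) of the original
let avatar = photo.cropped(to: CGRect(x: 100, y: 50, width: 200, height: 200))
```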
Using the function:
CGContextClipToRect(context, CGRectMake(0, 0, size.width, size.height));
Here's some example code; it was used for a different purpose, but it clips fine.
- (UIImage *)aspectFillToSize:(CGSize)size
{
    CGFloat imgAspect = self.size.width / self.size.height;
    CGFloat sizeAspect = size.width / size.height;

    CGSize scaledSize;
    if (sizeAspect > imgAspect) { // increase width, crop height
        scaledSize = CGSizeMake(size.width, size.width / imgAspect);
    } else { // increase height, crop width
        scaledSize = CGSizeMake(size.height * imgAspect, size.height);
    }

    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0f);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextClipToRect(context, CGRectMake(0, 0, size.width, size.height));

    [self drawInRect:CGRectMake(0.0f, 0.0f, scaledSize.width, scaledSize.height)];

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
If you want a portrait crop down the center of every photo, use @M-V's solution and replace cropRect:
CGFloat height = imageTaken.size.height;
CGFloat width = imageTaken.size.width;
CGFloat newWidth = height * 9 / 16;
CGFloat newX = fabs(width - newWidth) / 2;
CGRect cropRect = CGRectMake(newX, 0, newWidth, height);
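With concrete numbers, the 9:16 math above works out like this for a 4032x3024 landscape photo (the dimensions are illustrative):

```swift
import CoreGraphics

// Worked example of the 9:16 centered-crop math
let width: CGFloat = 4032, height: CGFloat = 3024
let newWidth = height * 9 / 16            // 3024 * 9 / 16 = 1701
let newX = abs(width - newWidth) / 2      // (4032 - 1701) / 2 = 1165.5
let cropRect = CGRect(x: newX, y: 0, width: newWidth, height: height)
```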
I wanted to be able to crop from a region based on an aspect ratio, and scale to a size based on an outer bounding extent. Here is my variation:
import AVFoundation
import ImageIO
import UIKit

class Image {
    class func crop(image: UIImage, source: CGRect, aspect: CGSize, outputExtent: CGSize) -> UIImage {
        let sourceRect = AVMakeRect(aspectRatio: aspect, insideRect: source)
        let targetRect = AVMakeRect(aspectRatio: aspect, insideRect: CGRect(origin: .zero, size: outputExtent))

        let opaque = true
        let deviceScale: CGFloat = 0.0 // use scale of device's main screen
        UIGraphicsBeginImageContextWithOptions(targetRect.size, opaque, deviceScale)

        // scale factor: output size / cropped source size
        let scale = max(
            targetRect.size.width / sourceRect.size.width,
            targetRect.size.height / sourceRect.size.height)

        // offset by the scaled crop origin and draw the whole image scaled up
        let drawRect = CGRect(x: -sourceRect.origin.x * scale,
                              y: -sourceRect.origin.y * scale,
                              width: image.size.width * scale,
                              height: image.size.height * scale)
        image.draw(in: drawRect)

        let scaledImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return scaledImage
    }
}
There are a couple of things that I found confusing: the separate concerns of cropping and resizing. Cropping is handled with the origin of the rect that you pass to drawInRect, and scaling is handled by the size portion. In my case, I needed to relate the size of the cropping rect on the source to my output rect of the same aspect ratio. The scale factor is then output / input, and this needs to be applied to the drawRect (passed to drawInRect).
One caveat is that this approach effectively assumes that the image you are drawing is larger than the image context. I have not tested this, but I think you can use this code to handle cropping / zooming, but explicitly defining the scale parameter to be the aforementioned scale parameter. By default, UIKit applies a multiplier based on the screen resolution.
Finally, it should be noted that this UIKit approach is higher level than CoreGraphics / Quartz and Core Image approaches, and seems to handle image orientation issues. It is also worth mentioning that it is pretty fast, second to ImageIO, according to this post here: http://nshipster.com/image-resizing/

How to spin a UIImage?

I want to create a new UIImage from another one which is turned to 45° (at its bottom left corner, clockwise). The space around the old image would be filled white or so. In the image I uploaded, the old image would be the blue one and the new image would be the actual image I linked, including the white parts.
Played a little bit in playground with Swift and here is my solution:
func rotateImage(image: UIImage, rotationDegree: CGFloat) -> UIImage {
    // Rotating by 540 degrees is the same as rotating by 180, so reduce modulo 360
    var degree = rotationDegree.truncatingRemainder(dividingBy: 360)

    // If the degree is negative, use the equivalent positive angle
    if degree < 0.0 {
        degree += 360
    }

    // Get image size
    let size = image.size
    let width = size.width
    let height = size.height

    // Reduce the angle to its equivalent in 0...90 for the bounding-box calculation
    var calcDegree = degree.truncatingRemainder(dividingBy: 180)
    if calcDegree > 90 {
        calcDegree = 180 - calcDegree
    }

    // Calculate the new size
    let newWidth = width * cos(calcDegree.degreesToRadians) + height * sin(calcDegree.degreesToRadians)
    let newHeight = width * sin(calcDegree.degreesToRadians) + height * cos(calcDegree.degreesToRadians)
    let newSize = CGSize(width: newWidth, height: newHeight)

    // Create context using the new size, make it opaque, use the screen scale
    UIGraphicsBeginImageContextWithOptions(newSize, true, UIScreen.main.scale)
    let context = UIGraphicsGetCurrentContext()!

    // Set fill color to white (or any other)
    // If no fill color is needed, pass opaque = false when creating the context
    context.setFillColor(UIColor.white.cgColor)
    context.fill(CGRect(origin: .zero, size: newSize))

    // Rotate the context around its center and draw the image
    context.translateBy(x: newSize.width * 0.5, y: newSize.height * 0.5)
    context.rotate(by: degree.degreesToRadians)
    context.translateBy(x: newSize.width * -0.5, y: newSize.height * -0.5)
    image.draw(at: CGPoint(x: (newSize.width - size.width) / 2.0, y: (newSize.height - size.height) / 2.0))

    // Get image from context
    let returnImage = UIGraphicsGetImageFromCurrentImageContext()!

    // End graphics context
    UIGraphicsEndImageContext()
    return returnImage
}
Do not forget to include this extension:
extension CGFloat {
    var degreesToRadians: CGFloat {
        return self * .pi / 180.0
    }
}
I would recommend going through this answer to better understand how I calculated newSize after the image is rotated.
If you just want to change the way an image is displayed, transform the image view that displays it.
If you really want a new rotated image, redraw the image in a transformed graphics context.
If you just want to rotate the UIImageView used to display the image, you could do this:
#define DegreesToRadians(x) ((x) * M_PI / 180.0) //put this at the top of your file
imageView.transform = CGAffineTransformMakeRotation(DegreesToRadians(45));
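The same view-level rotation in Swift, for reference (the image view setup is illustrative):

```swift
import UIKit

// Rotates the view that displays the image; the UIImage itself is unchanged.
let imageView = UIImageView(image: UIImage(named: "photo")) // hypothetical asset
imageView.transform = CGAffineTransform(rotationAngle: 45 * .pi / 180)
```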
But if you want to rotate the actual image, do something like this:
- (UIImage *)image:(UIImage *)image rotatedByDegrees:(CGFloat)degrees
{
    // calculate the size of the rotated view's containing box for our drawing space
    UIView *rotatedViewBox = [[UIView alloc] initWithFrame:CGRectMake(0, 0, image.size.width, image.size.height)];
    CGAffineTransform t = CGAffineTransformMakeRotation(DegreesToRadians(degrees));
    rotatedViewBox.transform = t;
    CGSize rotatedSize = rotatedViewBox.frame.size;

    // Create the bitmap context
    UIGraphicsBeginImageContext(rotatedSize);
    CGContextRef bitmap = UIGraphicsGetCurrentContext();

    // Move the origin to the middle of the image so we will rotate and scale around the center.
    CGContextTranslateCTM(bitmap, rotatedSize.width / 2, rotatedSize.height / 2);

    // Rotate the image context
    CGContextRotateCTM(bitmap, DegreesToRadians(degrees));

    // Now, draw the rotated/scaled image into the context
    CGContextScaleCTM(bitmap, 1.0, -1.0);
    CGContextDrawImage(bitmap, CGRectMake(-image.size.width / 2, -image.size.height / 2, image.size.width, image.size.height), [image CGImage]);

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
The code above is adapted from this answer by The Lion: https://stackoverflow.com/a/11667808/1757960

Show Blended Layers reveals red UIImage but it does not have an alpha channel

I have an image that does not have an alpha channel (I confirmed this in Finder's Get Info panel). Yet when I put it in a UIImageView, which is within a UIScrollView, and enable Show Blended Layers, the image is colored red, which indicates iOS is applying transparency, and that will be a hit on performance.
How can I fix this so it shows green, meaning iOS knows everything in this view is fully opaque?
I tried the following, but it did not remove the red color:
self.imageView.opaque = YES;
self.scrollView.opaque = YES;
By default, UIImage instances are rendered in a graphics context that includes an alpha channel. To avoid that, you need to generate another image using a new graphics context where opaque = YES.
- (UIImage *)optimizedImageFromImage:(UIImage *)image
{
    CGSize imageSize = image.size;
    // opaque = YES drops the alpha channel; keep the source image's scale
    UIGraphicsBeginImageContextWithOptions(imageSize, YES, image.scale);
    [image drawInRect:CGRectMake(0, 0, imageSize.width, imageSize.height)];
    UIImage *optimizedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return optimizedImage;
}
Swift 3.x / Xcode 9.x:
func optimizedImage(from image: UIImage) -> UIImage {
    let imageSize: CGSize = image.size
    UIGraphicsBeginImageContextWithOptions(imageSize, true, UIScreen.main.scale)
    image.draw(in: CGRect(x: 0, y: 0, width: imageSize.width, height: imageSize.height))
    let optimizedImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return optimizedImage ?? UIImage()
}
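On iOS 10 and later, the same redraw can be expressed with UIGraphicsImageRenderer by marking the renderer format opaque; this is a sketch, not part of the original answers:

```swift
import UIKit

// Redraw an image into an opaque context so the result has no alpha channel.
func opaqueImage(from image: UIImage) -> UIImage {
    let format = UIGraphicsImageRendererFormat.default()
    format.opaque = true          // rendered image gets no alpha channel
    format.scale = image.scale    // preserve the source image's scale
    let renderer = UIGraphicsImageRenderer(size: image.size, format: format)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: image.size))
    }
}
```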

iOS UIImage masked border

Is there a way in iOS to add a border to an image that is not a simple rectangle?
I have successfully tinted an image using the following code:
- (UIImage *)imageWithTintColor:(UIColor *)tintColor
{
    UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
    CGContextRef context = UIGraphicsGetCurrentContext();

    CGContextTranslateCTM(context, 0, self.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextSetBlendMode(context, kCGBlendModeNormal);

    CGRect rect = CGRectMake(0, 0, self.size.width, self.size.height);
    CGContextClipToMask(context, rect, self.CGImage);

    [tintColor setFill];
    CGContextFillRect(context, rect);

    UIImage *tintedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return tintedImage;
}
Let's say, for example, I wanted to add a blue border to this image (note: this is NOT an 'A' NSString, but a UIImage object example).
When I alter the code above to use [color setStroke] and CGContextStrokeRect(context, rect), the image just disappears.
I've already learned from SO that this is possible using Core Image + edge detection, but isn't there a "simple" Core Graphics way, similar to tinting an image?
Thank you!
-- EDIT --
Please note that I want to add the border to the image itself. I don't want to create the border effect through a UIImageView!
The border should match the shape of the image before applying the border.
In this case: blue outline for the outside + inside of the 'A'.
This is not a very satisfying method, I would say, but it works to some extent.
You can make use of adding a shadow to the layer. For this you need to strip off the white portion of the image, leaving the character surrounded by alpha.
I used the code below:
UIImage *image = [UIImage imageNamed:@"image_name_here.png"];

CALayer *imageLayer = [CALayer layer];
imageLayer.frame = CGRectMake(0, 0, 200.0f, 200.0f);
imageLayer.contents = (id)image.CGImage;
imageLayer.position = self.view.layer.position;

imageLayer.shadowColor = [UIColor whiteColor].CGColor;
imageLayer.shadowOffset = CGSizeMake(0.0f, 0.0f);
imageLayer.shadowOpacity = 1.0f;
imageLayer.shadowRadius = 4.0f;

[self.view.layer addSublayer:imageLayer];
And the result would be something like this.
I realize this was asked 2 years ago and is probably not still relevant to you, however I'd like to submit my solution in case anyone else stumbles upon this question while looking for an answer (like I just did).
One way to generate a border around an image is by tinting the image to your border color (say black), and then overlaying a smaller copy of your image onto the middle of the tinted one.
I built my solution upon your imageWithTintColor:tintColor function as an extension to UIImage in Swift 3:
extension UIImage {
    func imageByApplyingBorder(ofSize borderSize: CGFloat, andColor borderColor: UIColor) -> UIImage {
        /*
         Get the scale of the smaller image.
         If borderSize is 10%, then the smaller image should be 90% of its original size.
         */
        let scale: CGFloat = 1.0 - borderSize

        // Generate tinted background image of original size
        let backgroundImage = imageWithTintColor(borderColor)

        // Generate smaller image of scale
        let smallerImage = imageByResizing(by: scale)

        UIGraphicsBeginImageContext(backgroundImage.size)

        // Draw background image first, followed by smaller image in the middle
        backgroundImage.draw(at: CGPoint(x: 0, y: 0))
        smallerImage.draw(at: CGPoint(
            x: (backgroundImage.size.width - smallerImage.size.width) / 2,
            y: (backgroundImage.size.height - smallerImage.size.height) / 2
        ))

        let borderedImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return borderedImage
    }

    func imageWithTintColor(_ color: UIColor) -> UIImage {
        UIGraphicsBeginImageContext(size)
        let context = UIGraphicsGetCurrentContext()!

        // Turn upside-down (later transformations turn the image back)
        context.translateBy(x: 0, y: size.height)
        context.scaleBy(x: 1.0, y: -1.0)
        context.setBlendMode(.normal)

        // Mask to the visible part of the image (turns the image right-side-up)
        context.clip(to: CGRect(x: 0, y: 0, width: size.width, height: size.height), mask: self.cgImage!)

        // Fill with input color
        color.setFill()
        context.fill(CGRect(x: 0, y: 0, width: size.width, height: size.height))

        let tintedImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return tintedImage
    }

    func imageByResizing(by scale: CGFloat) -> UIImage {
        // Determine new width and height
        let width = scale * size.width
        let height = scale * size.height

        // Draw a scaled-down image
        UIGraphicsBeginImageContextWithOptions(CGSize(width: width, height: height), false, 0.0)
        draw(in: CGRect(x: 0, y: 0, width: width, height: height))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return newImage
    }
}
Please note that the borderSize parameter of imageByApplyingBorder(ofSize:andColor:) is given as a percentage of the original image size. If your image is 100x100 px and borderSize = 0.1, then you will get an image of size 100x100 px with a 10x10 px internal border.
Here is an example image generated using the above function on a 1000x1000 px circular center clip of one of the stock iOS Simulator photos:
Any suggestions for optimizations or other approaches are welcome.
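A usage sketch for the extension above (the asset name is illustrative):

```swift
import UIKit

let icon = UIImage(named: "icon")!  // hypothetical asset
// 10% border: a 100x100 px image gets a 10 px internal blue border
let bordered = icon.imageByApplyingBorder(ofSize: 0.1, andColor: .blue)
```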
You can use the code below to add a border to the UIImageView:
[self.testImage.layer setBorderColor:[UIColor blueColor].CGColor];
[self.testImage.layer setBorderWidth:5.0];
Try this
#import <QuartzCore/QuartzCore.h>
[yourUIImageView.layer setBorderColor:[UIColor blueColor].CGColor];
[yourUIImageView.layer setBorderWidth:6.0];
If someone looks for an outside transparent border for UIImageView or any other View, look at my solution here or here.
