How to merge two UIImages while keeping the aspect ratio and size? - ios

I have added the code to GitHub so you can see the real problem.
This is the hierarchy:
-- ViewController.View P [width: 375, height: 667]
---- UIImageView A [width: 375, height: 667] Name: imgBackground
     [A is holding an image of size (1287, 1662)]
---- UIImageView B [width: 100, height: 100] Name: imgForeground
     [B is holding an image of size (2400, 982)]
I am trying to merge A with B but the result is stretched.
This is the merge code:
func mixImagesWith(frontImage:UIImage?, backgroundImage: UIImage?, atPoint point:CGPoint, ofSize signatureSize:CGSize) -> UIImage {
    let size = self.imgBackground.frame.size
    UIGraphicsBeginImageContextWithOptions(size, false, UIScreen.main.scale)
    backgroundImage?.draw(in: CGRect.init(x: 0, y: 0, width: size.width, height: size.height))
    frontImage?.draw(in: CGRect.init(x: point.x, y: point.y, width: signatureSize.width, height: signatureSize.height))
    let newImage:UIImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return newImage
}
Note: the image views have .contentMode = .scaleAspectFit.
The code works, but the result is stretched.
See this line in the code: let size = self.imgBackground.frame.size – I need to change this to fix the problem: find the origin of the subview with respect to the UIImage's size.
Here's the screenshot to understand the problem:
What should I do to get the proper output of merge function?

You have two bugs in your code:
You also need to calculate the aspect-fit frame for the background (document) image so it fits the UIImageView. In mergeImages() replace:
img.draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
with:
img.draw(in: getAspectFitFrame(sizeImgView: size, sizeImage: img.size))
When calculating the aspect-fit frame you center the image horizontally/vertically when its width/height is less than the UIImageView's width/height, but instead of comparing newWidth and newHeight you should compare the factors:
if hfactor > vfactor {
    y = (sizeImgView.height - newHeight) / 2
} else {
    x = (sizeImgView.width - newWidth) / 2
}
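Putting the two fixes together, the aspect-fit helper would look roughly like this (a sketch assuming the getAspectFitFrame(sizeImgView:sizeImage:) signature from the linked project; only the comparison and the usage in mergeImages() change):
import UIKit

func getAspectFitFrame(sizeImgView: CGSize, sizeImage: CGSize) -> CGRect {
    let hfactor = sizeImage.width / sizeImgView.width
    let vfactor = sizeImage.height / sizeImgView.height
    let factor = max(hfactor, vfactor)              // the greater shrinkage factor wins
    let newWidth = sizeImage.width / factor         // aspect-fit size
    let newHeight = sizeImage.height / factor
    var x: CGFloat = 0, y: CGFloat = 0
    if hfactor > vfactor {                          // compare the factors, not newWidth/newHeight
        y = (sizeImgView.height - newHeight) / 2    // width fills the view, so center vertically
    } else {
        x = (sizeImgView.width - newWidth) / 2      // height fills the view, so center horizontally
    }
    return CGRect(x: x, y: y, width: newWidth, height: newHeight)
}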

Try the code below; it works for me, and I hope it works for you too:
// `handler` is assumed to be a stored property of type ((UIImage) -> ())? on the enclosing class
func addWaterMarkToImage(img:UIImage, sizeWaterMark:CGRect, waterMarkImage:UIImage, completion : ((UIImage)->())?){
    handler = completion
    let img2:UIImage = waterMarkImage
    let rect = CGRect(x: 0, y: 0, width: img.size.width, height: img.size.height)
    UIGraphicsBeginImageContext(img.size)
    img.draw(in: rect)
    let frameAspect:CGRect = getAspectFitFrame(sizeImgView: sizeWaterMark.size, sizeImage: waterMarkImage.size)
    let frameOrig:CGRect = CGRect(x: sizeWaterMark.origin.x+frameAspect.origin.x, y: sizeWaterMark.origin.y+frameAspect.origin.y, width: frameAspect.size.width, height: frameAspect.size.height)
    img2.draw(in: frameOrig, blendMode: .normal, alpha: 1)
    let result = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    if handler != nil {
        handler!(result!)
    }
}
// MARK: - Get aspect-fit frame of a UIImage
func getAspectFitFrame(sizeImgView:CGSize, sizeImage:CGSize) -> CGRect{
    let imageSize:CGSize = sizeImage
    let viewSize:CGSize = sizeImgView
    let hfactor : CGFloat = imageSize.width/viewSize.width
    let vfactor : CGFloat = imageSize.height/viewSize.height
    let factor : CGFloat = max(hfactor, vfactor)
    // Divide the size by the greater of the vertical or horizontal shrinkage factor
    let newWidth : CGFloat = imageSize.width / factor
    let newHeight : CGFloat = imageSize.height / factor
    var x:CGFloat = 0.0
    var y:CGFloat = 0.0
    if newWidth > newHeight{
        y = (sizeImgView.height - newHeight)/2
    }
    if newHeight > newWidth{
        x = (sizeImgView.width - newWidth)/2
    }
    let newRect:CGRect = CGRect(x: x, y: y, width: newWidth, height: newHeight)
    return newRect
}
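A hypothetical call site (background, signature and resultImageView are placeholder names, not part of the answer) might look like:
addWaterMarkToImage(img: background,
                    sizeWaterMark: CGRect(x: 40, y: 40, width: 100, height: 100),
                    waterMarkImage: signature) { merged in
    resultImageView.image = merged
}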

Related

UIBarButtonItem custom view aspect fill swift 4

I'm developing an app that displays, in the main ViewController, the user's profile image inside a round UIBarButtonItem. I'm using a custom button with cornerRadius and clipsToBounds enabled, and I resize the UIImage's width to 75% of the navigation bar's height so it fits well. I also set button.imageView?.contentMode = .scaleAspectFill.
When I use a square image (width = height) it works perfectly, but if I use a portrait or landscape image it looks as if the bar button were using .scaleAspectFit.
I already tried first creating a square UIImage by cropping the original profile image, without any luck.
This is my Bar button code:
func setProfileButton() {
    let width = self.navigationController!.navigationBar.frame.size.height * 0.75
    if let image = ResizeImage(CFUser.current!.getProfileImage(), to: width) {
        let button = UIButton(type: .custom)
        button.setImage(image, for: .normal)
        button.imageView?.contentMode = .scaleAspectFill
        button.addTarget(self, action: #selector(goProfile), for: .touchUpInside)
        button.frame = CGRect(x: 0, y: 0, width: width, height: width)
        button.layer.cornerRadius = button.bounds.width / 2
        button.clipsToBounds = true
        let barButton = UIBarButtonItem(customView: button)
        self.navigationItem.rightBarButtonItem = barButton
    }
}
This is ResizeImage code:
func ResizeImage(_ image: UIImage, to width: CGFloat) -> UIImage? {
    let size = image.size
    let ratio = width / image.size.width
    let height = image.size.height * ratio
    let targetSize = CGSize(width: width, height: height)
    let widthRatio = targetSize.width / image.size.width
    let heightRatio = targetSize.height / image.size.height
    var newSize: CGSize
    if(widthRatio > heightRatio) {
        newSize = CGSize(width: size.width * heightRatio, height: size.height * heightRatio)
    } else {
        newSize = CGSize(width: size.width * widthRatio, height: size.height * widthRatio)
    }
    let rect = CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height)
    UIGraphicsBeginImageContextWithOptions(newSize, false, 1.0)
    image.draw(in: rect)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
Here is the app working with a square image:
And this is with a portrait image:
Thanks for your help! :)
PS: I'm using Xcode 10 and Swift 4.
Since a portrait image has different width and height, unlike a square image, calculate the minimum dimension and size the image accordingly.
Replace the method ResizeImage(_ image: UIImage, to width: CGFloat) -> UIImage? with the one below:
func ResizeImage(_ image: UIImage, to width: CGFloat) -> UIImage? {
    let ratio = width / image.size.width
    let height = image.size.height * ratio
    let dimension = min(width, height)
    let targetSize = CGSize(width: dimension, height: dimension)
    let rect = CGRect(x: 0, y: 0, width: targetSize.width, height: targetSize.height)
    UIGraphicsBeginImageContextWithOptions(targetSize, false, 1.0)
    image.draw(in: rect)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
Square image:
Portrait image:
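If drawing the photo into a square rect still distorts very wide or very tall images, one alternative (a sketch under the assumption that a center crop is acceptable, not part of the answer above) is to center-crop the source to a square first and then pass the result to ResizeImage:
import UIKit

// Sketch: center-crop a UIImage to a square (names are illustrative)
func squareCropped(_ image: UIImage) -> UIImage? {
    let side = min(image.size.width, image.size.height)
    let scale = image.scale
    // Crop in pixel coordinates; images with a non-.up orientation may need normalizing first
    let cropRect = CGRect(x: (image.size.width - side) / 2 * scale,
                          y: (image.size.height - side) / 2 * scale,
                          width: side * scale,
                          height: side * scale)
    guard let cgImage = image.cgImage?.cropping(to: cropRect) else { return nil }
    return UIImage(cgImage: cgImage, scale: scale, orientation: image.imageOrientation)
}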

Create a frame image from a single image

I want to make a frame image from a single image. Below is the code I'm using:
func createFrameFromImage(image:UIImage , size :CGSize) -> UIImage
{
    let imageSize = CGSize.init(width: size.width , height: size.height)
    UIGraphicsBeginImageContext(imageSize)
    let width = imageSize.width
    let height = imageSize.height
    var letTop = image
    let rightTop = rotateImageByAngles(image: &letTop, angles: .pi/2) // correct
    let rightBottom = rotateImageByAngles(image: &letTop, angles: -.pi) // correct
    let leftBottom = rotateImageByAngles(image: &letTop, angles: -.pi/2) // correct
    letTop.draw(in: CGRect(x: 0, y: 0, width: width/2, height: height/2))
    rightTop.draw(in: CGRect(x: (width/2), y: 0, width: width/2, height: height/2))
    leftBottom.draw(in: CGRect(x: 0, y: height/2, width: width/2, height: height/2))
    rightBottom.draw(in: CGRect(x: (width/2), y: (height/2), width: width/2, height: height/2))
    guard let finalImage = UIGraphicsGetImageFromCurrentImageContext() else { return rightTop }
    UIGraphicsEndImageContext()
    return finalImage
}
The function above takes one piece of an image, creates four different images by rotating it by specific angles, and merges them to make a frame image. The issue I'm facing is maintaining the image ratio: for example, if I create a final image of size 320 × 120, the image is squeezed horizontally. I'm attaching a screenshot of the output. I want to show the generated image on a wall using ARKit.
Final frame image
Given Image
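For reference, createFrameFromImage relies on a rotateImageByAngles(image:angles:) helper that isn't shown in the question. A minimal sketch of such a helper (an assumption about its shape, not the asker's actual code) could be:
import UIKit

func rotateImageByAngles(image: inout UIImage, angles: CGFloat) -> UIImage {
    let size = image.size
    // Note: for 90° rotations of non-square images you may want to swap width and height here
    UIGraphicsBeginImageContextWithOptions(size, false, image.scale)
    guard let context = UIGraphicsGetCurrentContext() else { return image }
    // Rotate around the center of the canvas, then draw the image back centered
    context.translateBy(x: size.width / 2, y: size.height / 2)
    context.rotate(by: angles)
    image.draw(in: CGRect(x: -size.width / 2, y: -size.height / 2, width: size.width, height: size.height))
    let rotated = UIGraphicsGetImageFromCurrentImageContext() ?? image
    UIGraphicsEndImageContext()
    return rotated
}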
// Adding Frame
// 1 inch = 72 points
//converting size inch to points to create frame image
let frameWidth = (size.width + 1)
let frameHeight = (size.height + 1)
let imgFrameUnit = UIImage(named: "img.png")!
let imgFrame = Singleton.shared.createFrameFromImage(image: imgFrameUnit, size: CGSize(width: frameWidth , height: frameHeight))
let frame = SCNNode(geometry: SCNPlane(width: ((frameWidth * 2.54) / 100), height: ((frameHeight * 2.54) / 100))) // in meters
frame.geometry?.firstMaterial?.diffuse.contents = imgFrame
frame.name = "frame"
nodeWeCanChange?.addChildNode(frame)
Any help would be really appreciated!

How to merge two UIImages while keeping their position, size and aspect ratios?

I am trying to merge two UIImages of different sizes.
I have UIImage A is of the following size: 1287 × 1662 pixels
And UIImage B is of the following size: 200 × 200 pixels
I am showing A and B in the following UIImageViews:
UIImageView backgroundImageView of the size: 375 x 667
And UIImageView foregroundImageView of the size: 100 x 100
User can move foregroundImageView to any position above the backgroundImageView.
This is the merging code:
let previewImage:UIImage? = mergeImages(img: imgBackground.image!, sizeWaterMark: CGRect.init(origin: imgForeground.frame.origin, size: CGSize.init(width: 100, height: 100)), waterMarkImage: imgForeground.image!)
func mergeImages(img:UIImage, sizeWaterMark:CGRect, waterMarkImage:UIImage) -> UIImage {
    let size = self.imgBackground.frame.size
    UIGraphicsBeginImageContextWithOptions(size, false, UIScreen.main.scale)
    img.draw(in: getAspectFitFrame(sizeImgView: size, sizeImage: img.size))
    let frameAspect:CGRect = getAspectFitFrame(sizeImgView: sizeWaterMark.size, sizeImage: waterMarkImage.size)
    let frameOrig:CGRect = CGRect(x: sizeWaterMark.origin.x+frameAspect.origin.x, y: sizeWaterMark.origin.y+frameAspect.origin.y, width: frameAspect.size.width, height: frameAspect.size.height)
    waterMarkImage.draw(in: frameOrig, blendMode: .normal, alpha: 1)
    let result:UIImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return result
}
func getAspectFitFrame(sizeImgView:CGSize, sizeImage:CGSize) -> CGRect {
    let imageSize:CGSize = sizeImage
    let viewSize:CGSize = sizeImgView
    let hfactor : CGFloat = imageSize.width/viewSize.width
    let vfactor : CGFloat = imageSize.height/viewSize.height
    let factor : CGFloat = max(hfactor, vfactor)
    // Divide the size by the greater of the vertical or horizontal shrinkage factor
    let newWidth : CGFloat = imageSize.width / factor
    let newHeight : CGFloat = imageSize.height / factor
    var x:CGFloat = 0.0
    var y:CGFloat = 0.0
    if hfactor > vfactor {
        y = (sizeImgView.height - newHeight) / 2
    } else {
        x = (sizeImgView.width - newWidth) / 2
    }
    let newRect:CGRect = CGRect(x: x, y: y, width: newWidth, height: newHeight)
    return newRect
}
This actually merges the images and gives me what I am looking for, but it reduces the size of the merged image, because of this line in the mergeImages function:
let size = self.imgBackground.frame.size
I want the size to be the original size of UIImage A. But if I change it to this,
let size = self.imgBackground.image!.size
the location of B over A changes after merging.
For testing, you can download and check the source code from here.
What should I do to keep the original size as it is while having the exact position of B over A with proper aspect ratio?
I made the utility functions static (it would be even better to move them into a separate file) to be sure they are not using ViewController instance properties and methods.
In mergeImages I removed:
let size = self.imgBackground.frame.size
and replaced size with img.size. This is the same as using self.imgBackground.image!.size, as you described in the question.
Because the source and target image sizes are now the same, there is no need to adjust the aspect, so we simply replace:
img.draw(in: getAspectFitFrame(sizeImgView: size, sizeImage: img.size))
with
img.draw(in: CGRect(origin: CGPoint(x: 0, y: 0), size: img.size))
I also extracted the aspect-factor calculation into a separate function, getFactor, to make the code more granular, and made getAspectFitFrame return not only the CGRect but also the aspect factor (it will be useful later).
The utility functions now look like this:
static func mergeImages(img: UIImage, sizeWaterMark: CGRect, waterMarkImage: UIImage) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(img.size, false, UIScreen.main.scale)
    img.draw(in: CGRect(origin: CGPoint(x: 0, y: 0), size: img.size))
    let (frameAspect, _) = getAspectFitFrame(from: sizeWaterMark.size, to: waterMarkImage.size)
    let frameOrig = CGRect(x: sizeWaterMark.origin.x + frameAspect.origin.x, y: sizeWaterMark.origin.y + frameAspect.origin.y, width: frameAspect.size.width, height: frameAspect.size.height)
    waterMarkImage.draw(in: frameOrig, blendMode: .normal, alpha: 1)
    let result = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return result
}
static func getAspectFitFrame(from: CGSize, to: CGSize) -> (CGRect, CGFloat) {
    let (hfactor, vfactor, factor) = ViewController.getFactor(from: from, to: to)
    // Divide the size by the greater of the vertical or horizontal shrinkage factor
    let newWidth = to.width / factor
    let newHeight = to.height / factor
    var x: CGFloat = 0.0
    var y: CGFloat = 0.0
    if hfactor > vfactor {
        y = (from.height - newHeight) / 2
    } else {
        x = (from.width - newWidth) / 2
    }
    return (CGRect(x: x, y: y, width: newWidth, height: newHeight), factor)
}
static func getFactor(from: CGSize, to: CGSize) -> (CGFloat, CGFloat, CGFloat) {
    let hfactor = to.width / from.width
    let vfactor = to.height / from.height
    return (hfactor, vfactor, max(hfactor, vfactor))
}
You also need another utility function that maps the watermark's frame from image-view coordinates back into the background image's coordinates (subtract the aspect-fit offset, then multiply by the scale factor):
static func getScaledFrame(from: CGSize, to: CGSize, target: CGRect) -> CGRect {
    let (aspectFitFrame, factor) = ViewController.getAspectFitFrame(from: from, to: to)
    return CGRect(
        origin: CGPoint(
            x: (target.origin.x - aspectFitFrame.origin.x) * factor,
            y: (target.origin.y - aspectFitFrame.origin.y) * factor),
        size: CGSize(width: target.width * factor, height: target.height * factor)
    )
}
Now you are ready to render the merged image:
let previewImage = ViewController.mergeImages(
    img: imgBackground.image!,
    sizeWaterMark: ViewController.getScaledFrame(from: imgBackground.frame.size, to: imgBackground.image!.size, target: imgForeground.frame),
    waterMarkImage: imgForeground.image!
)

Resize and Crop 2 Images affected the original image quality

Suppose I have a UIImage object on a UIViewController, and I want to set the image from the controller. Basically, what I want to do is merge two images together, where the first image is the 5 stars in blue:
and the second image is the 5 stars in grey:
It's intended as a rating image. Since the maximum rating is 5, I multiply it by 20 to get a 100-point scale and make the calculation easier (for example, a 3.7 rating becomes 74, so 74% of the width shows the active stars). Please see the code for the detailed logic.
So I have this (BM_RatingHelper.swift):
static func getRatingImageBasedOnRating(rating: CGFloat, width: CGFloat, height: CGFloat) -> UIImage {
    // available maximum rating is 5.0, so we have to multiply it by 20 to achieve 100.0 point
    let ratingImageWidth = ( width / 100.0 ) * ( rating * 20.0 )
    // get active rating image
    let activeRatingImage = BM_ImageHelper.resize(UIImage(named: "StarRatingFullActive")!, targetSize: CGSize(width: width, height: height))
    let activeRatingImageView = UIImageView(frame: CGRectMake(0, 0, ratingImageWidth, height));
    activeRatingImageView.image = BM_ImageHelper.crop(activeRatingImage, x: 0, y: 0, width: ratingImageWidth, height: height);
    // get inactive rating image
    let inactiveRatingImage = BM_ImageHelper.resize(UIImage(named: "StarRatingFullInactive")!, targetSize: CGSize(width: width, height: height))
    let inactiveRatingImageView = UIImageView(frame: CGRectMake(ratingImageWidth, 0, ( 100.0 - ratingImageWidth ), height));
    inactiveRatingImageView.image = BM_ImageHelper.crop(inactiveRatingImage, x: ratingImageWidth, y: 0, width: ( 100.0 - ratingImageWidth ), height: height);
    // combine the images
    let ratingView = UIView.init(frame: CGRect(x: 0, y: 0, width: width, height: height))
    ratingView.backgroundColor = BM_Color.colorForType(BM_ColorType.ColorWhiteTransparent)
    ratingView.addSubview(activeRatingImageView)
    ratingView.addSubview(inactiveRatingImageView)
    return ratingView.capture()
}
This is BM_ImageHelper.swift:
import UIKit

class BM_ImageHelper: NSObject {

    // http://stackoverflow.com/questions/158914/cropping-an-uiimage
    static func crop(image: UIImage, x: CGFloat, y: CGFloat, width: CGFloat, height: CGFloat) -> UIImage {
        let rect = CGRect(x: x, y: y, width: width, height: height)
        let imageRef = CGImageCreateWithImageInRect(image.CGImage, rect)!
        let croppedImage = UIImage(CGImage: imageRef)
        return croppedImage
    }

    // http://iosdevcenters.blogspot.com/2015/12/how-to-resize-image-in-swift-in-ios.html
    static func resize(image: UIImage, targetSize: CGSize) -> UIImage {
        let size = image.size
        let widthRatio = targetSize.width / image.size.width
        let heightRatio = targetSize.height / image.size.height
        // Figure out what our orientation is, and use that to form the rectangle
        var newSize: CGSize
        if(widthRatio > heightRatio) {
            newSize = CGSizeMake(size.width * heightRatio, size.height * heightRatio)
        } else {
            newSize = CGSizeMake(size.width * widthRatio, size.height * widthRatio)
        }
        // This is the rect that we've calculated out and this is what is actually used below
        let rect = CGRectMake(0, 0, newSize.width, newSize.height)
        // Actually do the resizing to the rect using the ImageContext stuff
        UIGraphicsBeginImageContextWithOptions(newSize, false, 1.0)
        image.drawInRect(rect)
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage
    }
}

extension UIView {
    // http://stackoverflow.com/a/34895760/897733
    func capture() -> UIImage {
        UIGraphicsBeginImageContextWithOptions(self.frame.size, self.opaque, UIScreen.mainScreen().scale)
        self.layer.renderInContext(UIGraphicsGetCurrentContext()!)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}
I call that function like this (supposing the image that needs to be filled is ratingImage):
self.ratingImage.image = BM_RatingHelper.getRatingImageBasedOnRating(3.7, width: 100.0, height: 20.0)
The code works, but the merged image is very low quality even though I used high-quality source images. This is the image for a 3.7 rating:
What should I do to merge the images without losing the original quality? Thanks.
In your BM_ImageHelper.resize method you are passing a scale of 1.0. It should be the device's screen scale.
Change it to
UIGraphicsBeginImageContextWithOptions(newSize, false, UIScreen.mainScreen().scale)
UPDATE
Also change your crop method to take the scale into account, like this:
static func crop(image: UIImage, x: CGFloat, y: CGFloat, width: CGFloat, height: CGFloat) -> UIImage {
    let transform = CGAffineTransformMakeScale(image.scale, image.scale)
    let rect = CGRect(x: x, y: y, width: width, height: height)
    let transformedCropRect = CGRectApplyAffineTransform(rect, transform);
    let imageRef = CGImageCreateWithImageInRect(image.CGImage, transformedCropRect)!
    let croppedImage = UIImage(CGImage: imageRef, scale: image.scale, orientation: image.imageOrientation)
    return croppedImage
}

How to merge two UIImages?

I am trying to merge two different images to create a new one. This is how I would like to do it:
I have this image (A):
It's a PNG image, and I would like to merge it with another image (B), which I took with the phone, to create something like this:
I need a function that merges A with B, creating C. The size must stay that of image A, and image B should automatically adapt its size to fit into the polaroid (A). Is it possible to do that? Thanks for your help!
UPDATE
Just one thing: image A is square and the image I took is 16:9. How can I fix that? If I use your function, the image (B) that I took becomes stretched!
Hope this may help you:
var bottomImage = UIImage(named: "bottom.png")
var topImage = UIImage(named: "top.png")
var size = CGSize(width: 300, height: 300)
UIGraphicsBeginImageContext(size)
let areaSize = CGRect(x: 0, y: 0, width: size.width, height: size.height)
bottomImage!.draw(in: areaSize)
topImage!.draw(in: areaSize, blendMode: .normal, alpha: 0.8)
var newImage:UIImage = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
All the Best :)
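Regarding the update in the question (the 16:9 photo gets stretched): the snippet above draws the top image into the full areaSize, which ignores its aspect ratio. A sketch of one way around that, assuming you know the rect of the polaroid's photo area inside image A (polaroidPhotoArea below is an assumed value), is to aspect-fit B into that rect before drawing:
import UIKit

func merge(_ bottomImage: UIImage, with topImage: UIImage, fittingInto polaroidPhotoArea: CGRect) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(bottomImage.size, false, bottomImage.scale)
    bottomImage.draw(in: CGRect(origin: .zero, size: bottomImage.size))
    // Scale B so it fits the target area without changing its aspect ratio, centered in the area
    let scale = min(polaroidPhotoArea.width / topImage.size.width,
                    polaroidPhotoArea.height / topImage.size.height)
    let fittedSize = CGSize(width: topImage.size.width * scale, height: topImage.size.height * scale)
    let fittedOrigin = CGPoint(x: polaroidPhotoArea.midX - fittedSize.width / 2,
                               y: polaroidPhotoArea.midY - fittedSize.height / 2)
    topImage.draw(in: CGRect(origin: fittedOrigin, size: fittedSize))
    let merged = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return merged
}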
Swift 5: Extension for UIImage
extension UIImage {
    func mergeWith(topImage: UIImage) -> UIImage {
        let bottomImage = self
        UIGraphicsBeginImageContext(size)
        let areaSize = CGRect(x: 0, y: 0, width: bottomImage.size.width, height: bottomImage.size.height)
        bottomImage.draw(in: areaSize)
        topImage.draw(in: areaSize, blendMode: .normal, alpha: 1.0)
        let mergedImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return mergedImage
    }
}
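One thing to note: UIGraphicsBeginImageContext(size) renders at a scale of 1.0, so the merged image can look soft on Retina screens. If that matters, a variant that preserves the receiver's scale (a sketch, not the original answer) would be:
extension UIImage {
    func mergeWith(topImage: UIImage) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(size, false, scale)  // keep the bottom image's scale
        defer { UIGraphicsEndImageContext() }
        let areaSize = CGRect(origin: .zero, size: size)
        draw(in: areaSize)
        topImage.draw(in: areaSize, blendMode: .normal, alpha: 1.0)
        return UIGraphicsGetImageFromCurrentImageContext() ?? self
    }
}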
Swift 4 UIImage extension that enables easy image merging / overlaying.
extension UIImage {
    func overlayWith(image: UIImage, posX: CGFloat, posY: CGFloat) -> UIImage {
        let newWidth = size.width < posX + image.size.width ? posX + image.size.width : size.width
        let newHeight = size.height < posY + image.size.height ? posY + image.size.height : size.height
        let newSize = CGSize(width: newWidth, height: newHeight)
        UIGraphicsBeginImageContextWithOptions(newSize, false, 0.0)
        draw(in: CGRect(origin: CGPoint.zero, size: size))
        image.draw(in: CGRect(origin: CGPoint(x: posX, y: posY), size: image.size))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return newImage
    }
}
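Usage could look like this (background and badge are placeholder UIImage values):
// Overlay `badge` on `background` with its top-left corner at (24, 24)
let combined = background.overlayWith(image: badge, posX: 24, posY: 24)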
This way the overlay picture will be much cleaner:
class func mergeImages(imageView: UIImageView) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(imageView.frame.size, false, 0.0)
    imageView.superview!.layer.renderInContext(UIGraphicsGetCurrentContext()!)
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}
Objective-C version of this solution with the top image re-centered:
- (UIImage *)getImageInclosedWithinAnotherImage
{
    float innerImageSize = 20;
    UIImage *finalImage;
    UIImage *outerImage = [UIImage imageNamed:@"OuterImage.png"];
    UIImage *innerImage = [UIImage imageNamed:@"InnerImage.png"];
    CGSize outerImageSize = CGSizeMake(40, 40); // Provide custom size or size of your actual image
    UIGraphicsBeginImageContext(outerImageSize);
    // calculate areaSize for the re-centered inner image
    CGRect areSize = CGRectMake(((outerImageSize.width/2) - (innerImageSize/2)), ((outerImageSize.width/2) - (innerImageSize/2)), innerImageSize, innerImageSize);
    [outerImage drawInRect:CGRectMake(0, 0, outerImageSize.width, outerImageSize.height)];
    [innerImage drawInRect:areSize blendMode:kCGBlendModeNormal alpha:1.0];
    finalImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return finalImage;
}
The upvoted answer stretches the background image, changing its aspect ratio. The solution below fixes that by rendering the image from a UIView that contains the two image views as subviews.
ANSWER YOU ARE LOOKING FOR (Swift 4):
func blendImages(_ img: UIImage, _ imgTwo: UIImage) -> Data? {
    let bottomImage = img
    let topImage = imgTwo
    let imgView = UIImageView(frame: CGRect(x: 0, y: 0, width: 306, height: 306))
    let imgView2 = UIImageView(frame: CGRect(x: 0, y: 0, width: 306, height: 306))
    // - Set content mode to what you desire
    imgView.contentMode = .scaleAspectFill
    imgView2.contentMode = .scaleAspectFit
    // - Set images
    imgView.image = bottomImage
    imgView2.image = topImage
    // - Create UIView
    let contentView = UIView(frame: CGRect(x: 0, y: 0, width: 306, height: 306))
    contentView.addSubview(imgView)
    contentView.addSubview(imgView2)
    // - Set size
    let size = CGSize(width: 306, height: 306)
    // - Where the magic happens
    UIGraphicsBeginImageContextWithOptions(size, true, 0)
    contentView.drawHierarchy(in: contentView.bounds, afterScreenUpdates: true)
    guard let i = UIGraphicsGetImageFromCurrentImageContext(),
        let data = UIImageJPEGRepresentation(i, 1.0)
        else { return nil }
    UIGraphicsEndImageContext()
    return data
}
Because the context uses the device's screen scale (the 0 passed to UIGraphicsBeginImageContextWithOptions), the returned image data is rendered at twice the point size on a 2x device, so set the size of the views at half the desired output size.
EXAMPLE: I wanted the width and height of the image to be 612, so I set the view frames' width and height to 306.
// Enjoy :)
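If you would rather not halve the view sizes, one alternative (an assumption about your needs, not part of the answer above) is to pass an explicit scale instead of 0, so one point maps to one pixel:
UIGraphicsBeginImageContextWithOptions(size, true, 1.0)  // 1 point == 1 pixel, regardless of device scale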
Slightly modified version of answer by budidino. This implementation also handles negative posX and posY correctly.
extension UIImage {
    func overlayWith(image: UIImage, posX: CGFloat, posY: CGFloat) -> UIImage {
        let newWidth = posX < 0 ? abs(posX) + max(self.size.width, image.size.width) :
            size.width < posX + image.size.width ? posX + image.size.width : size.width
        let newHeight = posY < 0 ? abs(posY) + max(size.height, image.size.height) :
            size.height < posY + image.size.height ? posY + image.size.height : size.height
        let newSize = CGSize(width: newWidth, height: newHeight)
        UIGraphicsBeginImageContextWithOptions(newSize, false, 0.0)
        let originalPoint = CGPoint(x: posX < 0 ? abs(posX) : 0, y: posY < 0 ? abs(posY) : 0)
        self.draw(in: CGRect(origin: originalPoint, size: self.size))
        let overLayPoint = CGPoint(x: posX < 0 ? 0 : posX, y: posY < 0 ? 0 : posY)
        image.draw(in: CGRect(origin: overLayPoint, size: image.size))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return newImage
    }
}
