Merging two UIImages - iOS

I want to merge two UIImages, but I'm having some difficulty. I'm playing around with a drawing app where, when the user is done, their final drawing is merged with the original image they wanted to draw on.
topImage and bottomImage are both final images and need to be merged at their corresponding aspect-fit ratio.
At the end the user can combine both images into one, but the catch is that bottomImageView's UIImage will vary in size.
I've been trying to merge them without distorting the newImage or losing the aspect fit.
Set up:
I have two UIImageViews, both set to (width: 310, height: 400):
bottomImageView
bottomImage is a JPEG: size (600, 800) // other images will vary in ratio
topImageView
topImage is a PNG (for transparency): size (310, 400)
The bottomImage, which is part of that Facebook post, is what the user will theoretically "draw" on. The topImage (red lines/circle) is what should be merged with the bottom image.
How can I merge them together without changing the size of the UIImages?
I've tried two different ways but to no avail.
Way 1:
let size = CGSize(width: 300, height: 400)
UIGraphicsBeginImageContext(size)
let areaSize = CGRect(x: 0, y: 0, width: size.width, height: size.height)
bottomImage.drawInRect(areaSize)
topImage.drawInRect(areaSize, blendMode: CGBlendMode.Normal, alpha: 1.0)
let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Way 1 doesn't work because it creates a new (300, 400) canvas and stretches both images to fill it, which distorts both my bottomImage and topImage.
Way 2:
let bottomImageView: UIImageView = UIImageView(frame: CGRect(x: 0, y: 0, width: 310, height: 400))
let topImageView: UIImageView = UIImageView(frame: CGRect(x: 0, y: 0, width: 310, height: 400))
bottomImageView.image = bottomImage
topImageView.image = topImage
bottomImageView.contentMode = .ScaleAspectFit
topImageView.contentMode = .ScaleAspectFit
UIGraphicsBeginImageContext(bottomImageView.frame.size)
bottomImageView.image?.drawInRect(CGRect(x: 0, y: 0, width: bottomImageView.frame.size.width, height: bottomImageView.frame.size.height), blendMode: CGBlendMode.Normal, alpha: 1.0)
topImageView.image?.drawInRect(CGRect(x:0, y: 0, width: topImageView.frame.size.width, height: topImageView.frame.size.height), blendMode: CGBlendMode.Normal, alpha: 1.0)
let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Way 2: I thought that putting the images into two UIImageViews and then "pressing" them together would work, but it ended up distorting my newImage just like Way 1.
Is it possible to combine them without making the newImage look distorted?
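For reference, here is a minimal sketch of the usual fix, in modern Swift: compute an aspect-fit rectangle for each image and draw into that, rather than drawing both into the full canvas rect. It assumes AVFoundation's AVMakeRect(aspectRatio:insideRect:) helper and UIGraphicsImageRenderer; the 310x400 canvas is taken from the question's setup, and the function name is illustrative.

import AVFoundation
import UIKit

func mergeAspectFit(bottom: UIImage, top: UIImage) -> UIImage {
    // Canvas matching the 310x400 image views from the question.
    let canvas = CGRect(x: 0, y: 0, width: 310, height: 400)
    let renderer = UIGraphicsImageRenderer(size: canvas.size)
    return renderer.image { _ in
        // AVMakeRect returns the largest aspect-fit rect centered in `canvas`,
        // so neither image is stretched.
        bottom.draw(in: AVMakeRect(aspectRatio: bottom.size, insideRect: canvas))
        top.draw(in: AVMakeRect(aspectRatio: top.size, insideRect: canvas))
    }
}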

Related

Why does draw(in: CGRect) draw a border around the ellipse when it should not?

This is the simple function I use for drawing an image in a context:
let renderer = UIGraphicsImageRenderer(size: CGSize(width: 330, height: 330))
let img = renderer.image { ctx in
    let circle = CGRect(x: 0, y: 0, width: 330, height: 330)
    ctx.cgContext.setFillColor(UIColor.white.cgColor)
    ctx.cgContext.addEllipse(in: circle)
    ctx.cgContext.drawPath(using: .fill)
    let image = UIImage(named: "1")!
    image.draw(in: CGRect(x: 80, y: 80, width: 100, height: 100))
}
And the result is following:
As you can see, the output of UIGraphicsImageRenderer has a border around the ellipse. Why? A border is not defined anywhere, yet it is drawn.
The image named 1 is the following one:
NOTE:
This issue appears only when building the iOS app. In a playground, everything is fine.
Does your UIImageView have a cornerRadius applied to its layer? That can cause a thin gray border like you see here. If you create a circular image, like you have with UIGraphicsImageRenderer, you should not need to do any masking or cornerRadius on the UIImageView.
If you only want to fill the path, and not stroke it, you can use fillPath rather than drawPath.
FWIW, you could also bypass the CoreGraphics context and just fill the oval directly:
let image = renderer.image { _ in
    UIColor.white.setFill()
    UIBezierPath(ovalIn: CGRect(x: 0, y: 0, width: 330, height: 330))
        .fill()
    UIImage(named: "1")!
        .draw(in: CGRect(x: 80, y: 80, width: 100, height: 100))
}
OK, the updated code still does not match.
First, in your posted image, the background is not white.
Second, even accounting for that, there is no "edge" on the rendered UIImage.
So, I'm going to make a guess here....
Assuming you execute the img = renderer.image { ... } code block and then set imageView.image = img, my suspicion is that you have something like this:
imageView.backgroundColor = .lightGray
imageView.layer.cornerRadius = imageView.frame.height / 2.0
So, the lightGray "circle" is the lightGray background anti-aliased to the .cornerRadius.
I would bet that if you set:
imageView.backgroundColor = .clear
and do not set the layer's cornerRadius (no need to), your ellipse border will be gone.
If it's still there, then you need to provide some code that actually reproduces the issue.
Edit
I'm still not seeing the "border" when setting the rendered image to an image view, but...
Doing some debug inspecting and using the "1" image you added, there IS a difference.
Try this, and see if it gets rid of the border:
let fmt = UIGraphicsImageRendererFormat()
fmt.preferredRange = .standard
let renderer = UIGraphicsImageRenderer(size: CGSize(width: 330, height: 330), format: fmt)
You can then use either:
let img = renderer.image { ctx in
    let circle = CGRect(x: 0, y: 0, width: 330, height: 330)
    ctx.cgContext.setFillColor(UIColor.white.cgColor)
    ctx.cgContext.addEllipse(in: circle)
    ctx.cgContext.drawPath(using: .fill)
    if let image = UIImage(named: "1") {
        image.draw(in: CGRect(x: 80, y: 80, width: 100, height: 100))
    }
}
or Rob's suggested approach:
let img = renderer.image { _ in
    UIColor.white.setFill()
    UIBezierPath(ovalIn: CGRect(x: 0, y: 0, width: 330, height: 330))
        .fill()
    if let image = UIImage(named: "1") {
        image.draw(in: CGRect(x: 80, y: 80, width: 100, height: 100))
    }
}

Merge and edit 2 UIImages

I looked around for a solution, but I didn't find one.
I have a block of code that merges 2 images into one.
It's like taking a photo and then applying a .png on top of it.
But I would like to let the user move the top image, to choose its position, before saving the merged image.
Thanks for any help with how I could do it.
Here is my func:
func mergeFrame(bottomImage: UIImage, topImage: UIImage) -> UIImage {
    let size = CGSize(width: bottomImage.size.width, height: bottomImage.size.height)
    UIGraphicsBeginImageContext(size)
    let areaSize = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    bottomImage.draw(in: areaSize)
    topImage.draw(in: areaSize, blendMode: .normal, alpha: 0.8)
    let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return newImage
}
If you need to move the top image, the topImage should have a flexible drawing area:
topImage.draw(in: topArea, blendMode: .normal, alpha: 0.8)
Then use a UIPanGestureRecognizer to change topArea.origin based on gesture.translation(in:) while the gesture's state is .changed.
When the gesture's state is .ended, call the posted mergeFrame(bottomImage:topImage:), drawing the top image in its final topArea, to get the composed image.
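For illustration, here is a hypothetical sketch of that wiring; the class, property, and image names are mine, not from the original answer.

import UIKit

class FrameEditorViewController: UIViewController {
    let bottomImage = UIImage(named: "photo")! // the captured photo (placeholder name)
    let topImage = UIImage(named: "frame")!    // the .png overlay (placeholder name)
    let overlayView = UIImageView()            // live preview of topImage
    var topArea = CGRect(x: 0, y: 0, width: 100, height: 100)

    override func viewDidLoad() {
        super.viewDidLoad()
        overlayView.image = topImage
        overlayView.frame = topArea
        overlayView.isUserInteractionEnabled = true
        overlayView.addGestureRecognizer(
            UIPanGestureRecognizer(target: self, action: #selector(handlePan)))
        view.addSubview(overlayView)
    }

    @objc func handlePan(_ gesture: UIPanGestureRecognizer) {
        switch gesture.state {
        case .changed:
            // Shift the drawing area by the drag distance, then reset the translation.
            let t = gesture.translation(in: view)
            topArea.origin.x += t.x
            topArea.origin.y += t.y
            gesture.setTranslation(.zero, in: view)
            overlayView.frame = topArea
        case .ended:
            // Compose once the drag ends, drawing the overlay at topArea instead of
            // over the whole canvas. (A real app would map topArea from view
            // coordinates into image coordinates first.)
            let renderer = UIGraphicsImageRenderer(size: bottomImage.size)
            let merged = renderer.image { _ in
                bottomImage.draw(in: CGRect(origin: .zero, size: bottomImage.size))
                topImage.draw(in: topArea, blendMode: .normal, alpha: 0.8)
            }
            _ = merged // save or display the composed image here
        default:
            break
        }
    }
}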
Hope you got it.

UIImage (Frame) and UIImage (Picture) merge

I have multiple sizes of frames, which can be hard-coded or decided by the server. I have to select an image from the gallery, which can of course have many different dimensions.
I am selecting an image from the gallery.
I am generating a white-background UIImage in code:
let size = CGSize(width: 424/2, height: 664/2)
UIGraphicsBeginImageContextWithOptions(size, true, 0)
UIColor.white.setFill()
UIRectFill(CGRect(x: 0, y: 0, width: size.width, height: size.height))
let background_image: UIImage? = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Now I want to draw another image on top, inset 20 pixels from the leading and top edges, with width and height 20 pixels smaller than the original background.
How can I achieve this?
Here's what I tried before coming to Stack Overflow:
func mergedImageWith(frontImage: UIImage?, backgroundImage: UIImage?) -> UIImage {
    if backgroundImage == nil {
        return frontImage!
    }
    let size = CGSize(width: 424/2, height: 664/2)

    // First pass: build the white background image.
    UIGraphicsBeginImageContextWithOptions(size, true, 0)
    UIColor.white.setFill()
    UIRectFill(CGRect(x: 0, y: 0, width: size.width, height: size.height))
    let backgroundImage2: UIImage? = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    // Second pass: draw the background, then the front image aspect-filled.
    UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
    backgroundImage2?.draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
    frontImage?.draw(in: getAspectFillFrame(sizeImageView: size, sizeImage: (frontImage?.size)!))
    let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return newImage
}
Here the background image is created with aspect fill, but the issue is the starting position and the overall width and height.
In very simple words: it's like making custom frames and merging them with images (aspect fill) for printing.
Can anyone help me out?
Thanks.
Try not ending your image context until all of the images are drawn. (I am also including some code that I have working, edited down a bit.)
class LayeredImageView: UIImageView {
    var imageBackground: UIImage!
    var imageForeground: UIImage!

    func renderLayers() {
        UIGraphicsBeginImageContextWithOptions(self.frame.size, false, UIScreen.main.scale)
        self.image?.draw(in: self.frame)
        imageBackground.draw(in: CGRect(<rect>))
        imageForeground.draw(in: CGRect(<rect>))
        self.image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
    }
}
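To sketch the inset arithmetic the question describes (my reading: the picture starts 20 points in from the leading and top edges and is 20 points smaller than the background on each side), here is one way to do it with modern renderer syntax and the aspect-fill math computed inline; the function name is illustrative.

import UIKit

func mergedImage(front: UIImage, backgroundSize: CGSize) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: backgroundSize)
    return renderer.image { ctx in
        // White background, like the hand-built background image above.
        UIColor.white.setFill()
        ctx.fill(CGRect(origin: .zero, size: backgroundSize))

        // Target area inset 20 points from each edge (my reading of the question).
        let target = CGRect(x: 20, y: 20,
                            width: backgroundSize.width - 40,
                            height: backgroundSize.height - 40)

        // Aspect-fill: scale so the image covers `target`, center it, clip to `target`.
        let scale = max(target.width / front.size.width,
                        target.height / front.size.height)
        let drawSize = CGSize(width: front.size.width * scale,
                              height: front.size.height * scale)
        let drawRect = CGRect(x: target.midX - drawSize.width / 2,
                              y: target.midY - drawSize.height / 2,
                              width: drawSize.width,
                              height: drawSize.height)

        ctx.cgContext.saveGState()
        ctx.cgContext.clip(to: target)
        front.draw(in: drawRect)
        ctx.cgContext.restoreGState()
    }
}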

How to place a scrolling image behind a stationary image in Swift

I am trying to port an artificial-horizon app I wrote for a PC in C# to Swift. It has a bezel image which does not move, and behind it a horizon image which can move up and down behind the bezel. The "window" part of the bezel is yellow, so in C# I just made the yellow opaque.
In Swift I started with the horizon inside of a UIScrollView, but I'm not sure how to get that to work with a second image that should not scroll.
I'm not all that up to speed on this Swift stuff; can someone point me in the right direction?
let view: UIView = UIView(frame: CGRect(x: 0, y: 0, width: 500, height: 500))

let scrollView = UIScrollView(frame: view.bounds)
view.addSubview(scrollView)

let backImage: UIImage = fromColor(UIColor.redColor(), size: CGSize(width: 1000, height: 1000))
let backImageView: UIImageView = UIImageView(image: backImage)
scrollView.addSubview(backImageView)
scrollView.contentSize = CGSize(width: backImage.size.width, height: backImage.size.height)

let frontImage: UIImage = fromColor(UIColor.blueColor(), size: CGSize(width: 100, height: 100))
let layer: CALayer = CALayer()
layer.frame = CGRect(x: view.center.x - 50, y: view.center.y - 50, width: 100, height: 100)
layer.contents = frontImage.CGImage
view.layer.addSublayer(layer)
func fromColor(color: UIColor, size: CGSize) -> UIImage {
    let rect = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    UIGraphicsBeginImageContext(rect.size)
    let context = UIGraphicsGetCurrentContext()
    CGContextSetFillColorWithColor(context, color.CGColor)
    CGContextFillRect(context, rect)
    let img = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return img
}
fromColor is a helper method.
Result of the code
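For what it's worth, one common way to get the layering described here (an assumption on my part, not from the original post) is to keep the horizon inside the scroll view and add the bezel as a non-scrolling sibling view on top of it, with a genuinely transparent "window". In modern Swift, roughly:

let container = UIView(frame: CGRect(x: 0, y: 0, width: 500, height: 500))

let scrollView = UIScrollView(frame: container.bounds)
container.addSubview(scrollView)

// The horizon scrolls; "horizon" is a placeholder image name.
let horizonView = UIImageView(image: UIImage(named: "horizon"))
scrollView.addSubview(horizonView)
scrollView.contentSize = horizonView.bounds.size

// The bezel is added to `container`, not the scroll view, so it never moves.
// Its window must use alpha transparency rather than a keyed color.
let bezelView = UIImageView(image: UIImage(named: "bezel"))
bezelView.frame = container.bounds
bezelView.isUserInteractionEnabled = false // let touches fall through to the scroll view
container.addSubview(bezelView)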

Image blurred slightly when drawing in drawRect (Swift)

I am building a circle-crop function in Swift. I pretty much have everything working; however, when I save the cropped photo it is slightly blurry:
Not sure if it is visible here or not, but on my iPhone I can see a slight blurring. I'm not zooming in beyond the actual size of the image. I am using a UIScrollView with the maximum zoom scale set to 1.0. The crop code is:
func didTapOk() {
    let scale = scrollView.zoomScale
    let newSize = CGSize(width: image!.size.width * scale, height: image!.size.height * scale)
    let offset = scrollView.contentOffset

    UIGraphicsBeginImageContext(CGSize(width: 240, height: 240))
    let circlePath = UIBezierPath(ovalInRect: CGRect(x: 0, y: 0, width: 240, height: 240))
    circlePath.addClip()
    var sharpRect = CGRect(x: -offset.x, y: -offset.y, width: newSize.width, height: newSize.height)
    sharpRect = CGRectIntegral(sharpRect)
    image?.drawInRect(sharpRect)
    let finalImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    UIImageWriteToSavedPhotosAlbum(finalImage, nil, nil, nil)
}
Here I am trying to use CGRectIntegral to improve the result, but it doesn't seem to make any difference. Any pointers on what I could do to improve this?
What's happening is that your crop is blurry because it hasn't accounted for the scale of your screen (whether you're using a Retina display, etc.).
Use UIGraphicsBeginImageContextWithOptions(CGSize(width: 240, height: 240), true, 0) to account for the Retina screen. See the documentation for UIGraphicsBeginImageContextWithOptions for more details on how to use this function.
I suspect that, although the accepted answer above is working for you, it is only working due to the screen scale being the same as the image scale.
As you are not rendering your image from the screen itself (you're rendering from a UIImage to another UIImage), the screen scale should be irrelevant.
You should instead be passing in the scale of the image when you create your context, like so:
UIGraphicsBeginImageContextWithOptions(CGSize(width: 240, height: 240), false, image.scale)
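If you prefer the block-based API, the same idea looks roughly like this; it is a sketch that reuses image, offset, and newSize from the question's didTapOk():

let format = UIGraphicsImageRendererFormat()
format.scale = image!.scale // render at the source image's scale, not the screen's
let renderer = UIGraphicsImageRenderer(size: CGSize(width: 240, height: 240), format: format)
let cropped = renderer.image { _ in
    // Clip to the circle, then draw the scrolled-and-zoomed region of the image.
    UIBezierPath(ovalIn: CGRect(x: 0, y: 0, width: 240, height: 240)).addClip()
    image!.draw(in: CGRect(x: -offset.x, y: -offset.y,
                           width: newSize.width, height: newSize.height))
}
UIImageWriteToSavedPhotosAlbum(cropped, nil, nil, nil)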
Here's a way in Swift 3 to disable any interpolation.
UIGraphicsBeginImageContextWithOptions(imageSize, false, UIScreen.main.scale)
let context = UIGraphicsGetCurrentContext()
context?.interpolationQuality = CGInterpolationQuality.none
Additionally, since this is often for pixel drawing purposes, you may want to add:
// Ensure the image is not drawn upside down
context?.saveGState()
context?.translateBy(x: 0.0, y: maxHeight)
context?.scaleBy(x: 1.0, y: -1.0)
// Make the drawing
context?.draw(yourImage, in: CGRect(x: 0, y: 0, width: maxWidth, height: maxHeight))
// Restore the GState of the context
context?.restoreGState()
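For completeness, a minimal sketch of the same interpolation trick using UIGraphicsImageRenderer; imageSize and sourceImage are placeholders for your own values, and UIImage.draw(in:) takes care of the vertical flip for you:

let renderer = UIGraphicsImageRenderer(size: imageSize)
let sharp = renderer.image { ctx in
    // Disable resampling before drawing.
    ctx.cgContext.interpolationQuality = .none
    sourceImage.draw(in: CGRect(origin: .zero, size: imageSize))
}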
