Why does draw(in: CGRect) draw a border around the ellipse when it should not? (iOS)

This is the simple function I use for drawing an image into a context:
let renderer = UIGraphicsImageRenderer(size: CGSize(width: 330, height: 330))
let img = renderer.image { ctx in
    let circle = CGRect(x: 0, y: 0, width: 330, height: 330)
    ctx.cgContext.setFillColor(UIColor.white.cgColor)
    ctx.cgContext.addEllipse(in: circle)
    ctx.cgContext.drawPath(using: .fill)
    let image = UIImage(named: "1")!
    image.draw(in: CGRect(x: 80, y: 80, width: 100, height: 100))
}
And the result is the following:
As you can see, the output of UIGraphicsImageRenderer has a border around the ellipse. Why? No border is defined anywhere, yet one is drawn.
The image named "1" is this one:
NOTE:
This issue appears only when building the iOS app. In a playground everything renders fine.

Does your UIImageView have a cornerRadius applied to its layer? That can cause a thin gray border like you see here. If you create a circular image, like you have with UIGraphicsImageRenderer, you should not need to do any masking or cornerRadius on the UIImageView.
If you only want to fill the path, and not stroke it, you could use fillPath rather than drawPath(using:).
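For example, a minimal sketch of the same drawing with fillPath() (same renderer and sizes as the question's code):

```swift
import UIKit

// Fill-only ellipse: fillPath() can never stroke, so no accidental border.
let renderer = UIGraphicsImageRenderer(size: CGSize(width: 330, height: 330))
let img = renderer.image { ctx in
    let circle = CGRect(x: 0, y: 0, width: 330, height: 330)
    ctx.cgContext.setFillColor(UIColor.white.cgColor)
    ctx.cgContext.addEllipse(in: circle)
    ctx.cgContext.fillPath() // fills and clears the current path
}
```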
FWIW, you could also just bypass the CoreGraphics context and fill the oval directly:
let image = renderer.image { _ in
    UIColor.white.setFill()
    UIBezierPath(ovalIn: CGRect(x: 0, y: 0, width: 330, height: 330))
        .fill()
    UIImage(named: "1")!
        .draw(in: CGRect(x: 80, y: 80, width: 100, height: 100))
}

OK, the updated code still does not match.
First, in your posted image, the background is not white.
Second, even accounting for that, there is no "edge" on the rendered UIImage.
So, I'm going to make a guess here...
Assuming you execute the img = renderer.image { ... } code block and then set imageView.image = img, my suspicion is that you have something like this:
imageView.backgroundColor = .lightGray
imageView.layer.cornerRadius = imageView.frame.height / 2.0
So, the lightGray "circle" is the lightGray background anti-aliased to the .cornerRadius.
I would bet that if you set:
imageView.backgroundColor = .clear
and do not set the layer's cornerRadius (there's no need to), your ellipse border will be gone.
If it's still there, then you need to provide some code that actually reproduces the issue.
Edit
I'm still not seeing the "border" when setting the rendered image to an image view, but...
Doing some debug inspection with the "1" image you added, there IS a difference.
Try this, and see if it gets rid of the border:
let fmt = UIGraphicsImageRendererFormat()
fmt.preferredRange = .standard
let renderer = UIGraphicsImageRenderer(size: CGSize(width: 330, height: 330), format: fmt)
You can then use either:
let img = renderer.image { ctx in
    let circle = CGRect(x: 0, y: 0, width: 330, height: 330)
    ctx.cgContext.setFillColor(UIColor.white.cgColor)
    ctx.cgContext.addEllipse(in: circle)
    ctx.cgContext.drawPath(using: .fill)
    if let image = UIImage(named: "1") {
        image.draw(in: CGRect(x: 80, y: 80, width: 100, height: 100))
    }
}
or Rob's suggestion:
let img = renderer.image { _ in
    UIColor.white.setFill()
    UIBezierPath(ovalIn: CGRect(x: 0, y: 0, width: 330, height: 330))
        .fill()
    if let image = UIImage(named: "1") {
        image.draw(in: CGRect(x: 80, y: 80, width: 100, height: 100))
    }
}

Related

How to crop view with mask, but leave cropped-out parts partially opaque instead of hidden?

I want to crop out a portion of a view. I followed this article: "How to mask one UIView using another UIView", and this is my code currently:
class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        /// show a green border around the image view's original frame
        let backgroundView = UIView(frame: CGRect(x: 50, y: 50, width: 200, height: 300))
        backgroundView.layer.borderColor = UIColor.green.cgColor
        backgroundView.layer.borderWidth = 4
        view.addSubview(backgroundView)

        let imageView = UIImageView(frame: CGRect(x: 50, y: 50, width: 200, height: 300))
        imageView.image = UIImage(named: "TestImage")
        imageView.contentMode = .scaleAspectFill
        imageView.clipsToBounds = true
        view.addSubview(imageView)

        // MARK: - Mask code
        let maskView = UIView(frame: CGRect(x: 80, y: 100, width: 100, height: 100))
        maskView.backgroundColor = .blue /// ensure opaque
        view.addSubview(maskView)
        imageView.mask = maskView
    }
}
The mask is working fine:
Without mask
With mask
However, I want the parts of the image view that are cropped out to still be there, but just have a lower alpha. This is what it should look like:
I've tried changing maskView.alpha to 0.25, but that just makes the masked (visible) part less opaque.
How can I make the cropped-out parts still be there, but a bit more transparent?
Preferably I don't want to make another view, because eventually I'll be using this on a camera preview layer, and an extra view might have a performance cost.
Edit: matt's answer
I tried adding a subview with a background color with less alpha:
let maskView = UIView(frame: CGRect(x: 0, y: 0, width: 200, height: 300))
maskView.backgroundColor = UIColor.blue.withAlphaComponent(0.3)
let maskView2 = UIView(frame: CGRect(x: 80, y: 100, width: 100, height: 100))
maskView2.backgroundColor = UIColor.blue.withAlphaComponent(1)
maskView2.alpha = 0
maskView.addSubview(maskView2)
imageView.mask = maskView
But this is the result:
It’s all in the transparency of the colors you paint the mask with. (The hues β€” what we usually think of as color β€” are irrelevant.) The masking depends upon the degree of transparency. Areas of the mask that are partially transparent will make the masked view be partially transparent.
So make the mask the whole size of the target view, and make the whole mask a partially transparent color, except for the central area which is an opaque color.
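A minimal sketch of that idea, reusing the frames from matt's-answer attempt above (the hues are arbitrary; only the alpha of each region matters):

```swift
import UIKit

// Full-size mask: mostly 30% opaque, with a fully opaque cutout.
// Partially transparent mask regions make the masked view partially
// transparent; the fully opaque cutout shows through completely.
let maskView = UIView(frame: CGRect(x: 0, y: 0, width: 200, height: 300))
maskView.backgroundColor = UIColor.black.withAlphaComponent(0.3)

let cutout = UIView(frame: CGRect(x: 80, y: 100, width: 100, height: 100))
cutout.backgroundColor = .black // fully opaque => fully visible
maskView.addSubview(cutout)

// Assuming imageView is the 200x300 image view from the question:
// imageView.mask = maskView
```

Note that cutout's alpha stays 1 here; it is the low alpha of the surrounding mask color that dims the cropped-out parts.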

How to erase the thin side line right bottom?

I want a compound view shape: an oval with a rect subtracted from it.
So first I create a mask image:
let paths = CGMutablePath()
// Oval shape path at left
let leftPath = CGPath(ellipseIn: CGRect(x: 0, y: 0, width: 200, height: 150), transform: nil)
// Rect shape path at right
let rightPath = CGPath(roundedRect: CGRect(x: 100, y: 100, width: 120, height: 100), cornerWidth: 8, cornerHeight: 8, transform: nil)
paths.addPath(leftPath)
paths.addPath(rightPath)

UIGraphicsBeginImageContext(CGSize(width: 220, height: 200))
let ctx = UIGraphicsGetCurrentContext()
ctx?.addPath(paths)
ctx?.clip(using: .evenOdd)
ctx?.addPath(leftPath)
ctx?.clip(using: .evenOdd)
ctx?.setFillColor(UIColor.red.cgColor)
ctx?.fill(CGRect(x: 0, y: 0, width: 220, height: 200))
// the mask image
let maskImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Next I use this image as a mask:
let maskView = UIImageView(image: maskImage)
maskView.contentMode = .center
imageView.mask = maskView
Running the app, I get this:
Looks good? Not really...
If you look carefully, you will notice a thin line at the view's bottom right.
That's not OK for me!
How do I erase that line? Thanks :)
Draw the ellipse, then clear the rounded rect.
import UIKit

let ellipse = CGPath(ellipseIn: CGRect(x: 0, y: 0, width: 200, height: 150), transform: nil)
let roundedRect = CGPath(roundedRect: CGRect(x: 100, y: 100, width: 120, height: 100), cornerWidth: 8, cornerHeight: 8, transform: nil)

let maskImage = UIGraphicsImageRenderer(size: CGSize(width: 220, height: 200)).image { rendererContext in
    let ctx = rendererContext.cgContext
    ctx.setFillColor(UIColor.red.cgColor)
    ctx.addPath(ellipse)
    ctx.fillPath()
    ctx.setBlendMode(.clear)
    ctx.addPath(roundedRect)
    ctx.fillPath()
}
Result:

Clamping CIImage returns strange sizes

I'm trying to crop an image and extend the colors from the edges with clamping, but it returns a strange image size afterwards:
let image = UIImage(named: "frame_1")
var ciimage = CIImage(image: image!)
print("\(ciimage!.extent.width) x \(ciimage!.extent.height)")
// at this point it prints "480.0 x 360.0", which is fine
ciimage = ciimage!.clamping(to: CGRect(x: 50, y: 50, width: 480.0, height: 360.0))
print("\(ciimage!.extent.width) x \(ciimage!.extent.height)")
// now it prints two strange values: "1.79769313486232e+308 x 1.79769313486232e+308"
Shouldn't it return 480.0 and 360.0 after clamping? What am I doing wrong?
OK, I solved the problem by cropping after clamping:
ciimage = ciimage!.clamping(to: CGRect(x: 50, y: 50, width: 480, height: 360))
ciimage = ciimage!.cropping(to: CGRect(x: 0, y: 0, width: 480, height: 360))
The result is what I expected.
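For context on why this happens: clamping(to:) extends the edge pixels infinitely in all directions, so the resulting CIImage deliberately reports an infinite extent, and cropping afterwards is the way to get a finite extent back. The "strange" number is just the largest representable CGFloat:

```swift
import Foundation

// A sketch of where the odd number comes from: an infinitely clamped
// CIImage reports an infinite extent, whose side length is the largest
// finite CGFloat value.
let infiniteSide = CGFloat.greatestFiniteMagnitude
print(infiniteSide) // ~1.7976931348623157e+308, matching the log above
```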

how to place a scrolling image behind a stationary image in Swift

I am trying to port an artificial horizon app I wrote for a PC in c# to swift. It has a bezel image which does not move and behind it is a horizon image which can move up and down behind the bezel. The "window" part of the bezel is yellow so in c# I just made the yellow opaque.
In swift I stated with the horizon inside of a UIScrollView but I'm not sure how to get that to work with a second image that should not scroll.
Not all that up to speed on this swift stuff, can someone point me in the right direction?
let view: UIView = UIView(frame: CGRect(x: 0, y: 0, width: 500, height: 500))

let scrollView = UIScrollView(frame: view.bounds)
view.addSubview(scrollView)

let backImage: UIImage = fromColor(UIColor.redColor(), size: CGSize(width: 1000, height: 1000))
let backImageView: UIImageView = UIImageView(image: backImage)
scrollView.addSubview(backImageView)
scrollView.contentSize = CGSize(width: backImage.size.width, height: backImage.size.height)

let frontImage: UIImage = fromColor(UIColor.blueColor(), size: CGSize(width: 100, height: 100))
let layer: CALayer = CALayer()
layer.frame = CGRect(x: view.center.x - 50, y: view.center.y - 50, width: 100, height: 100)
layer.contents = frontImage.CGImage
view.layer.addSublayer(layer)

func fromColor(color: UIColor, size: CGSize) -> UIImage {
    let rect = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    UIGraphicsBeginImageContext(rect.size)
    let context = UIGraphicsGetCurrentContext()
    CGContextSetFillColorWithColor(context, color.CGColor)
    CGContextFillRect(context, rect)
    let img = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return img
}
fromColor is a helper method.
Result of the code
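The layering idea above can also be sketched in current Swift like this (the image names "horizon" and "bezel" are hypothetical): keep only the moving horizon inside the scroll view, and add the stationary bezel as a sibling on top, so it never scrolls.

```swift
import UIKit

// Hedged sketch: a stationary bezel image above a scrolling horizon.
let container = UIView(frame: CGRect(x: 0, y: 0, width: 500, height: 500))

let scrollView = UIScrollView(frame: container.bounds)
container.addSubview(scrollView)

// The horizon is the only thing inside the scroll view, so only it moves.
let horizonView = UIImageView(image: UIImage(named: "horizon"))
horizonView.frame = CGRect(x: 0, y: 0, width: 1000, height: 1000)
scrollView.addSubview(horizonView)
scrollView.contentSize = horizonView.frame.size

// Added last, as a sibling of the scroll view: it stays put while the
// content scrolls underneath it.
let bezelView = UIImageView(image: UIImage(named: "bezel"))
bezelView.frame = CGRect(x: 200, y: 200, width: 100, height: 100)
bezelView.isUserInteractionEnabled = false // let touches reach the scroll view
container.addSubview(bezelView)
```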

Merging two UIImage's

I want to merge two UIImages but I'm having some difficulties. I'm playing around with a drawing app where the user, when he or she is done, will merge their final drawing with the original image they wanted to draw on.
topImage and bottomImage are both final images and need to be merged with their corresponding aspect-fit ratios.
The user can combine both images into one at the end, but bottomImageView's UIImage will vary in size.
I've been trying to merge them without affecting the aspect ratio (fit) of the resulting newImage.
Set up:
I have two UIImageViews which are set to (width: 310, height: 400):
bottomImageView
bottomImage is a JPEG: size (600, 800) // other images will vary in ratio
topImageView
topImage is a PNG (for transparency): size (310, 400)
The bottomImage (the part of that Facebook post) is what the user will theoretically "draw". The topImage (red lines/circle) is what should be merged with the bottom image.
How can I merge them together without changing the size of the UIImage's?
I've tried two different ways but to no avail.
Way 1:
let size = CGSize(width: 300, height: 400)
UIGraphicsBeginImageContext(size)
let areaSize = CGRect(x: 0, y: 0, width: size.width, height: size.height)
bottomImage.drawInRect(areaSize)
topImage.drawInRect(areaSize, blendMode: CGBlendMode.Normal, alpha: 1.0)
let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Way 1 doesn't work because it creates a new canvas (300, 400), which alters the size and distorts both my bottomImage and topImage.
Way 2:
let bottomImageView: UIImageView = UIImageView(frame: CGRect(x: 0, y: 0, width: 310, height: 400))
let topImageView: UIImageView = UIImageView(frame: CGRect(x: 0, y: 0, width: 310, height: 400))
bottomImageView.image = topImage
topImageView.image = bottomImage
bottomImageView.contentMode = .ScaleAspectFit
topImageView.contentMode = .ScaleAspectFit
UIGraphicsBeginImageContext(bottomImageView.frame.size)
bottomImageView.image?.drawInRect(CGRect(x: 0, y: 0, width: bottomImageView.frame.size.width, height: bottomImageView.frame.size.height), blendMode: CGBlendMode.Normal, alpha: 1.0)
topImageView.image?.drawInRect(CGRect(x:0, y: 0, width: topImageView.frame.size.width, height: topImageView.frame.size.height), blendMode: CGBlendMode.Normal, alpha: 1.0)
let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Way 2: I thought that putting the images into two UIImageViews and then "pressing" them together would work, but it ended up distorting my newImage just like Way 1.
Is it possible to combine them without the newImage looking distorted?
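To merge without distortion, each image has to be drawn into its own aspect-fit rect within the canvas instead of being stretched into the full canvas rect. The math .scaleAspectFit uses can be sketched like this (aspectFitRect is a hypothetical helper; AVFoundation's AVMakeRect(aspectRatio:insideRect:) computes the same thing):

```swift
import Foundation

/// Hypothetical helper: the rect an image of `imageSize` occupies inside
/// `bounds` under .scaleAspectFit (scaled to fit, then centered).
func aspectFitRect(for imageSize: CGSize, in bounds: CGRect) -> CGRect {
    let scale = min(bounds.width / imageSize.width,
                    bounds.height / imageSize.height)
    let size = CGSize(width: imageSize.width * scale,
                      height: imageSize.height * scale)
    return CGRect(x: bounds.midX - size.width / 2,
                  y: bounds.midY - size.height / 2,
                  width: size.width,
                  height: size.height)
}

// The 600x800 bottom image inside the 310x400 canvas:
let fit = aspectFitRect(for: CGSize(width: 600, height: 800),
                        in: CGRect(x: 0, y: 0, width: 310, height: 400))
print(fit) // x: 5, y: 0, width: 300, height: 400
```

You would then draw bottomImage into its computed fit rect (and topImage into its own) rather than into areaSize, so neither image is stretched.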
