How to get one high-quality image, including subview images, from a scroll view - iOS

I am following an image-crop tutorial.
I want to make an app that crops an image and adds stickers.
The stickers are subviews of the image view ( imageView.addSubview(subview) ).
Please check this image link.
Everything works, but I can't produce the final image.
I want to render the final image at the original resolution, but my function's result is far too low resolution.
I've been stuck on this for 5 days.
Example:
scrollview's rect: 0, 0, 200, 400
imageview size: 7680 x 4320
I tried the code below, but the result quality is very low, because the output resolution is 200 x 400 (the scrollview's rect).
func screenshot() -> UIImage {
guard let imageview = imageview else { return UIImage() }
UIGraphicsBeginImageContextWithOptions(self.scrollView.bounds.size, false, UIScreen.main.scale)
let offset = self.scrollView.contentOffset
guard let thisContext = UIGraphicsGetCurrentContext() else { return UIImage() }
thisContext.translateBy(x: -offset.x, y: -offset.y)
self.scrollView.layer.render(in: thisContext)
let visibleScrollViewImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return visibleScrollViewImage ?? UIImage()
}
Please let me know how to get a high-resolution final image that includes the sticker subviews.

Using that method, you get an image of what is rendered on the screen.
What you want to do is calculate the "sticker" size and placement relative to the scaled image and then combine the images.
You can use this extension to overlay one image on another:
extension UIImage {
func overlayWith(image: UIImage, posX: CGFloat, posY: CGFloat) -> UIImage {
let rFormat = UIGraphicsImageRendererFormat()
let renderer = UIGraphicsImageRenderer(size: size, format: rFormat)
let newImage = renderer.image {
(context) in
self.draw(at: .zero)
image.draw(at: CGPoint(x: posX, y: posY))
}
return newImage
}
}
Use it by calling (for example):
let combinedImage = bkgImage.overlayWith(image: stickerImage, posX: 1000.0, posY: 1000.0)
That says "Create a new UIImage by overlaying the sticker image onto the background image at x: 1000, y:1000"
Remember that you'll need to translate your coordinates... So, if you are showing your 7680 x 4320 image in a 200 x 400 scrollview (not taking any zooming into account here), the on-screen image size will be 711 x 400. If the user places the sticker at 100, 50, the actual position on the original size image will be:
let scaleFactor = bkgImage.size.height / 400.0
let x = 100.0 * scaleFactor
let y = 50.0 * scaleFactor
// scaleFactor equals 10.8
// x equals 100 * 10.8 == 1080
// y equals 50 * 10.8 == 540
let combinedImage = bkgImage.overlayWith(image: stickerImage, posX: x, posY: y)
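If your scroll view allows zooming, the same approach works; you just need to divide out the zoomScale before applying the image scale factor. Here is a minimal sketch of that conversion (the function name and parameters are mine, for illustration only):
// Convert a sticker position from scroll-view content coordinates
// to pixel coordinates in the full-resolution image.
func imagePosition(forStickerAt contentPoint: CGPoint,
                   displayedImageHeight: CGFloat,
                   zoomScale: CGFloat,
                   in bkgImage: UIImage) -> CGPoint {
    // pixels-per-point for the un-zoomed, aspect-fit image
    let baseScale = bkgImage.size.height / displayedImageHeight
    // divide out the zoom first, then scale up to full resolution
    return CGPoint(x: (contentPoint.x / zoomScale) * baseScale,
                   y: (contentPoint.y / zoomScale) * baseScale)
}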
Here is a basic sample you can try out.
It starts with a background image of 5120 x 2880:
and a "sticker" image at 512 x 512:
And the result, with the sticker placed at x: 1000, y:1000. Top image is original background (aspectFit), middle image is the combined image (again, aspectFit), and the bottom image is the actual size combined image in a scroll view:
Use this source code to run that example (all code, no @IBOutlet):
import UIKit
extension UIImage {
func overlayWith(image: UIImage, posX: CGFloat, posY: CGFloat) -> UIImage {
let rFormat = UIGraphicsImageRendererFormat()
let renderer = UIGraphicsImageRenderer(size: size, format: rFormat)
let newImage = renderer.image {
(context) in
self.draw(at: .zero)
image.draw(at: CGPoint(x: posX, y: posY))
}
return newImage
}
}
class ViewController: UIViewController {
let origImageView: UIImageView = {
let v = UIImageView()
v.translatesAutoresizingMaskIntoConstraints = false
v.contentMode = .scaleAspectFit
v.backgroundColor = .yellow
return v
}()
let modImageView: UIImageView = {
let v = UIImageView()
v.translatesAutoresizingMaskIntoConstraints = false
v.contentMode = .scaleAspectFit
v.backgroundColor = .yellow
return v
}()
let actualSizeImageView: UIImageView = {
let v = UIImageView()
v.translatesAutoresizingMaskIntoConstraints = false
v.contentMode = .topLeft
v.backgroundColor = .yellow
return v
}()
let scrollView: UIScrollView = {
let v = UIScrollView()
v.translatesAutoresizingMaskIntoConstraints = false
return v
}()
override func viewDidLoad() {
super.viewDidLoad()
guard let bkgImage = UIImage(named: "background"),
let stickerImage = UIImage(named: "sticker") else {
fatalError("missing images")
}
view.backgroundColor = .systemGreen
view.addSubview(origImageView)
view.addSubview(modImageView)
view.addSubview(scrollView)
scrollView.addSubview(actualSizeImageView)
let g = view.safeAreaLayoutGuide
let sg = scrollView.contentLayoutGuide
NSLayoutConstraint.activate([
origImageView.topAnchor.constraint(equalTo: g.topAnchor, constant: 10.0),
origImageView.centerXAnchor.constraint(equalTo: g.centerXAnchor, constant: 0.0),
origImageView.widthAnchor.constraint(equalToConstant: 200.0),
origImageView.heightAnchor.constraint(equalToConstant: 120.0),
modImageView.topAnchor.constraint(equalTo: origImageView.bottomAnchor, constant: 10.0),
modImageView.centerXAnchor.constraint(equalTo: origImageView.centerXAnchor, constant: 0.0),
modImageView.widthAnchor.constraint(equalTo: origImageView.widthAnchor),
modImageView.heightAnchor.constraint(equalTo: origImageView.heightAnchor),
scrollView.topAnchor.constraint(equalTo: modImageView.bottomAnchor, constant: 10.0),
scrollView.centerXAnchor.constraint(equalTo: origImageView.centerXAnchor, constant: 0.0),
scrollView.widthAnchor.constraint(equalTo: g.widthAnchor, constant: -10.0),
scrollView.bottomAnchor.constraint(equalTo: g.bottomAnchor, constant: -10.0),
actualSizeImageView.topAnchor.constraint(equalTo: sg.topAnchor),
actualSizeImageView.bottomAnchor.constraint(equalTo: sg.bottomAnchor),
actualSizeImageView.leadingAnchor.constraint(equalTo: sg.leadingAnchor),
actualSizeImageView.trailingAnchor.constraint(equalTo: sg.trailingAnchor),
actualSizeImageView.widthAnchor.constraint(equalToConstant: bkgImage.size.width),
actualSizeImageView.heightAnchor.constraint(equalToConstant: bkgImage.size.height),
])
origImageView.image = bkgImage
let combinedImage = bkgImage.overlayWith(image: stickerImage, posX: 1000.0, posY: 1000.0)
modImageView.image = combinedImage
actualSizeImageView.image = combinedImage
}
}

Related

Stroke image border in SwiftUI

I'm trying to recreate Apple's festival lights image in SwiftUI (screenshot from Apple India's website). Expected result:
Here's what I've managed to achieve so far:
MY UNDERSTANDING SO FAR: Images are not Shapes, so we can't stroke their borders, but I also found that shadow() modifier places shadows on image borders just fine. So, I need a way to customize the shadow somehow and understand how it works.
WHAT I'VE TRIED SO FAR: Besides the code above, I unsuccessfully tried to convert a given SF Symbol to a Shape using the Vision framework's contour detection, based on my understanding of this article: https://www.iosdevie.com/p/new-in-ios-14-vision-contour-detection
Can someone please guide me on how I would go about doing this, preferably using SF Symbols only?
Looks like the Vision contour detection isn't a bad approach after all. I was just missing a few things, as helpfully pointed out by @DonMag. Here's my final answer using SwiftUI, in case someone's interested.
First, we create an InsettableShape:
import SwiftUI
import UIKit
import Vision
struct MKSymbolShape: InsettableShape {
var insetAmount = 0.0
let systemName: String
var trimmedImage: UIImage {
let cfg = UIImage.SymbolConfiguration(pointSize: 256.0)
// get the symbol
guard let imgA = UIImage(systemName: systemName, withConfiguration: cfg)?.withTintColor(.black, renderingMode: .alwaysOriginal) else {
fatalError("Could not load SF Symbol: \(systemName)!")
}
// we want to "strip" the bounding box empty space
// get a cgRef from imgA
guard let cgRef = imgA.cgImage else {
fatalError("Could not get cgImage!")
}
// create imgB from the cgRef
let imgB = UIImage(cgImage: cgRef, scale: imgA.scale, orientation: imgA.imageOrientation)
.withTintColor(.black, renderingMode: .alwaysOriginal)
// now render it on a white background
let resultImage = UIGraphicsImageRenderer(size: imgB.size).image { ctx in
UIColor.white.setFill()
ctx.fill(CGRect(origin: .zero, size: imgB.size))
imgB.draw(at: .zero)
}
return resultImage
}
func path(in rect: CGRect) -> Path {
// cgPath returned from Vision will be in rect 0,0 1.0,1.0 coordinates
// so we want to scale the path to our view bounds
let inputImage = self.trimmedImage
guard let cgPath = detectVisionContours(from: inputImage) else { return Path() }
let scW: CGFloat = (rect.width - CGFloat(insetAmount)) / cgPath.boundingBox.width
let scH: CGFloat = (rect.height - CGFloat(insetAmount)) / cgPath.boundingBox.height
// we need to invert the Y-coordinate space
var transform = CGAffineTransform.identity
.scaledBy(x: scW, y: -scH)
.translatedBy(x: 0.0, y: -cgPath.boundingBox.height)
if let imagePath = cgPath.copy(using: &transform) {
return Path(imagePath)
} else {
return Path()
}
}
func inset(by amount: CGFloat) -> some InsettableShape {
var shape = self
shape.insetAmount += amount
return shape
}
func detectVisionContours(from sourceImage: UIImage) -> CGPath? {
let inputImage = CIImage.init(cgImage: sourceImage.cgImage!)
let contourRequest = VNDetectContoursRequest()
contourRequest.revision = VNDetectContourRequestRevision1
contourRequest.contrastAdjustment = 1.0
contourRequest.maximumImageDimension = 512
let requestHandler = VNImageRequestHandler(ciImage: inputImage, options: [:])
try! requestHandler.perform([contourRequest])
if let contoursObservation = contourRequest.results?.first {
return contoursObservation.normalizedPath
}
return nil
}
}
Then we create our main view:
struct PreviewView: View {
var body: some View {
ZStack {
LinearGradient(colors: [.black, .purple], startPoint: .top, endPoint: .bottom)
.edgesIgnoringSafeArea(.all)
MKSymbolShape(systemName: "applelogo")
.stroke(LinearGradient(colors: [.yellow, .orange, .pink, .red], startPoint: .top, endPoint: .bottom), style: StrokeStyle(lineWidth: 8, lineCap: .round, dash: [2.0, 21.0]))
.aspectRatio(CGSize(width: 30, height: 36), contentMode: .fit)
.padding()
}
}
}
Final look:
We can use the Vision framework with VNDetectContourRequestRevision1 to get a cgPath:
func detectVisionContours(from sourceImage: UIImage) -> CGPath? {
let inputImage = CIImage.init(cgImage: sourceImage.cgImage!)
let contourRequest = VNDetectContoursRequest.init()
contourRequest.revision = VNDetectContourRequestRevision1
contourRequest.contrastAdjustment = 1.0
contourRequest.maximumImageDimension = 512
let requestHandler = VNImageRequestHandler.init(ciImage: inputImage, options: [:])
try! requestHandler.perform([contourRequest])
if let contoursObservation = contourRequest.results?.first {
return contoursObservation.normalizedPath
}
return nil
}
The path will be based on a 0,0 1.0,1.0 coordinate space, so to use it we need to scale the path to our desired size. It also uses inverted Y-coordinates, so we'll need to flip it also:
// cgPath returned from Vision will be in rect 0,0 1.0,1.0 coordinates
// so we want to scale the path to our view bounds
let scW: CGFloat = targetRect.width / cgPth.boundingBox.width
let scH: CGFloat = targetRect.height / cgPth.boundingBox.height
// we need to invert the Y-coordinate space
var transform = CGAffineTransform.identity
.scaledBy(x: scW, y: -scH)
.translatedBy(x: 0.0, y: -cgPth.boundingBox.height)
return cgPth.copy(using: &transform)
Couple notes...
When using UIImage(systemName: "applelogo"), we get an image with "font" characteristics - namely, empty space. See this https://stackoverflow.com/a/71743787/6257435 and this https://stackoverflow.com/a/66293917/6257435 for some discussion.
So, we could use it directly, but it makes the path scaling and translation a bit complex.
Instead of this "default":
we can use a little code to "trim" the space for a more usable image:
Then we can use the path from Vision as the path of a CAShapeLayer, along with these layer properties: .lineCap = .round / .lineWidth = 8 / .lineDashPattern = [2.0, 20.0] (for example) to get a "dotted line" stroke:
Then we can use that same path on a shape layer as a mask on a gradient layer:
and finally remove the image view so we see only the view with the masked gradient layer:
Here's example code to produce that:
import UIKit
import Vision
class ViewController: UIViewController {
let myOutlineView = UIView()
let myGradientView = UIView()
let shapeLayer = CAShapeLayer()
let gradientLayer = CAGradientLayer()
let defaultImageView = UIImageView()
let trimmedImageView = UIImageView()
var defaultImage: UIImage!
var trimmedImage: UIImage!
var visionPath: CGPath!
// an information label
let infoLabel: UILabel = {
let v = UILabel()
v.backgroundColor = UIColor(white: 0.95, alpha: 1.0)
v.textAlignment = .center
v.numberOfLines = 0
return v
}()
override func viewDidLoad() {
super.viewDidLoad()
view.backgroundColor = .systemBlue
// get the system image at 240-points (so we can get a good path from Vision)
// experiment with different sizes if the path doesn't appear smooth
let cfg = UIImage.SymbolConfiguration(pointSize: 240.0)
// get "applelogo" symbol
guard let imgA = UIImage(systemName: "applelogo", withConfiguration: cfg)?.withTintColor(.black, renderingMode: .alwaysOriginal) else {
fatalError("Could not load SF Symbol: applelogo!")
}
// now render it on a white background
self.defaultImage = UIGraphicsImageRenderer(size: imgA.size).image { ctx in
UIColor.white.setFill()
ctx.fill(CGRect(origin: .zero, size: imgA.size))
imgA.draw(at: .zero)
}
// we want to "strip" the bounding box empty space
// get a cgRef from imgA
guard let cgRef = imgA.cgImage else {
fatalError("Could not get cgImage!")
}
// create imgB from the cgRef
let imgB = UIImage(cgImage: cgRef, scale: imgA.scale, orientation: imgA.imageOrientation)
.withTintColor(.black, renderingMode: .alwaysOriginal)
// now render it on a white background
self.trimmedImage = UIGraphicsImageRenderer(size: imgB.size).image { ctx in
UIColor.white.setFill()
ctx.fill(CGRect(origin: .zero, size: imgB.size))
imgB.draw(at: .zero)
}
defaultImageView.image = defaultImage
defaultImageView.translatesAutoresizingMaskIntoConstraints = false
view.addSubview(defaultImageView)
trimmedImageView.image = trimmedImage
trimmedImageView.translatesAutoresizingMaskIntoConstraints = false
view.addSubview(trimmedImageView)
myOutlineView.translatesAutoresizingMaskIntoConstraints = false
view.addSubview(myOutlineView)
myGradientView.translatesAutoresizingMaskIntoConstraints = false
view.addSubview(myGradientView)
// next step button
let btn = UIButton()
btn.setTitle("Next Step", for: [])
btn.setTitleColor(.white, for: .normal)
btn.setTitleColor(.lightGray, for: .highlighted)
btn.backgroundColor = .systemRed
btn.layer.cornerRadius = 8
btn.translatesAutoresizingMaskIntoConstraints = false
view.addSubview(btn)
infoLabel.translatesAutoresizingMaskIntoConstraints = false
view.addSubview(infoLabel)
let g = view.safeAreaLayoutGuide
NSLayoutConstraint.activate([
// inset default image view 20-points on each side
// height proportional to the image
// near the top
defaultImageView.topAnchor.constraint(equalTo: g.topAnchor, constant: 20.0),
defaultImageView.leadingAnchor.constraint(equalTo: g.leadingAnchor, constant: 20.0),
defaultImageView.trailingAnchor.constraint(equalTo: g.trailingAnchor, constant: -20.0),
defaultImageView.heightAnchor.constraint(equalTo: defaultImageView.widthAnchor, multiplier: defaultImage.size.height / defaultImage.size.width),
// inset trimmed image view 40-points on each side
// height proportional to the image
// centered vertically
trimmedImageView.topAnchor.constraint(equalTo: g.topAnchor, constant: 40.0),
trimmedImageView.leadingAnchor.constraint(equalTo: g.leadingAnchor, constant: 40.0),
trimmedImageView.trailingAnchor.constraint(equalTo: g.trailingAnchor, constant: -40.0),
trimmedImageView.heightAnchor.constraint(equalTo: trimmedImageView.widthAnchor, multiplier: self.trimmedImage.size.height / self.trimmedImage.size.width),
// add outline view on top of trimmed image view
myOutlineView.topAnchor.constraint(equalTo: trimmedImageView.topAnchor, constant: 0.0),
myOutlineView.leadingAnchor.constraint(equalTo: trimmedImageView.leadingAnchor, constant: 0.0),
myOutlineView.trailingAnchor.constraint(equalTo: trimmedImageView.trailingAnchor, constant: 0.0),
myOutlineView.bottomAnchor.constraint(equalTo: trimmedImageView.bottomAnchor, constant: 0.0),
// add gradient view on top of trimmed image view
myGradientView.topAnchor.constraint(equalTo: trimmedImageView.topAnchor, constant: 0.0),
myGradientView.leadingAnchor.constraint(equalTo: trimmedImageView.leadingAnchor, constant: 0.0),
myGradientView.trailingAnchor.constraint(equalTo: trimmedImageView.trailingAnchor, constant: 0.0),
myGradientView.bottomAnchor.constraint(equalTo: trimmedImageView.bottomAnchor, constant: 0.0),
// button and info label below
btn.topAnchor.constraint(equalTo: defaultImageView.bottomAnchor, constant: 20.0),
btn.leadingAnchor.constraint(equalTo: trimmedImageView.leadingAnchor, constant: 0.0),
btn.trailingAnchor.constraint(equalTo: trimmedImageView.trailingAnchor, constant: 0.0),
infoLabel.topAnchor.constraint(equalTo: btn.bottomAnchor, constant: 20.0),
infoLabel.leadingAnchor.constraint(equalTo: trimmedImageView.leadingAnchor, constant: 0.0),
infoLabel.trailingAnchor.constraint(equalTo: trimmedImageView.trailingAnchor, constant: 0.0),
infoLabel.heightAnchor.constraint(greaterThanOrEqualToConstant: 60.0),
])
// setup the shape layer
shapeLayer.strokeColor = UIColor.red.cgColor
shapeLayer.fillColor = UIColor.clear.cgColor
// this will give us round dots for the shape layer's stroke
shapeLayer.lineCap = .round
shapeLayer.lineWidth = 8
shapeLayer.lineDashPattern = [2.0, 20.0]
// setup the gradient layer
let c1: UIColor = .init(red: 0.95, green: 0.73, blue: 0.32, alpha: 1.0)
let c2: UIColor = .init(red: 0.95, green: 0.25, blue: 0.45, alpha: 1.0)
gradientLayer.colors = [c1.cgColor, c2.cgColor]
myOutlineView.layer.addSublayer(shapeLayer)
myGradientView.layer.addSublayer(gradientLayer)
btn.addTarget(self, action: #selector(nextStep), for: .touchUpInside)
}
override func viewDidAppear(_ animated: Bool) {
super.viewDidAppear(animated)
guard let pth = pathSetup()
else {
fatalError("Vision could not create path")
}
self.visionPath = pth
shapeLayer.path = pth
gradientLayer.frame = myGradientView.bounds.insetBy(dx: -8.0, dy: -8.0)
let gradMask = CAShapeLayer()
gradMask.strokeColor = UIColor.red.cgColor
gradMask.fillColor = UIColor.clear.cgColor
gradMask.lineCap = .round
gradMask.lineWidth = 8
gradMask.lineDashPattern = [2.0, 20.0]
gradMask.path = pth
gradMask.position.x += 8.0
gradMask.position.y += 8.0
gradientLayer.mask = gradMask
nextStep()
}
var idx: Int = -1
@objc func nextStep() {
idx += 1
switch idx % 5 {
case 1:
defaultImageView.isHidden = true
trimmedImageView.isHidden = false
infoLabel.text = "\"applelogo\" system image - with trimmed empty-space bounding-box."
case 2:
myOutlineView.isHidden = false
shapeLayer.opacity = 1.0
infoLabel.text = "Dotted outline shape using Vision detected path."
case 3:
myOutlineView.isHidden = true
myGradientView.isHidden = false
infoLabel.text = "Use Dotted outline shape as a gradient layer mask."
case 4:
trimmedImageView.isHidden = true
view.backgroundColor = .black
infoLabel.text = "View by itself with Dotted outline shape as a gradient layer mask."
default:
view.backgroundColor = .systemBlue
defaultImageView.isHidden = false
trimmedImageView.isHidden = true
myOutlineView.isHidden = true
myGradientView.isHidden = true
shapeLayer.opacity = 0.0
infoLabel.text = "Default \"applelogo\" system image - note empty-space bounding-box."
}
}
func pathSetup() -> CGPath? {
// get the cgPath from the image
guard let cgPth = detectVisionContours(from: self.trimmedImage)
else {
print("Failed to get path!")
return nil
}
// cgPath returned from Vision will be in rect 0,0 1.0,1.0 coordinates
// so we want to scale the path to our view bounds
let scW: CGFloat = myOutlineView.bounds.width / cgPth.boundingBox.width
let scH: CGFloat = myOutlineView.bounds.height / cgPth.boundingBox.height
// we need to invert the Y-coordinate space
var transform = CGAffineTransform.identity
.scaledBy(x: scW, y: -scH)
.translatedBy(x: 0.0, y: -cgPth.boundingBox.height)
return cgPth.copy(using: &transform)
}
func detectVisionContours(from sourceImage: UIImage) -> CGPath? {
let inputImage = CIImage.init(cgImage: sourceImage.cgImage!)
let contourRequest = VNDetectContoursRequest.init()
contourRequest.revision = VNDetectContourRequestRevision1
contourRequest.contrastAdjustment = 1.0
contourRequest.maximumImageDimension = 512
let requestHandler = VNImageRequestHandler.init(ciImage: inputImage, options: [:])
try! requestHandler.perform([contourRequest])
if let contoursObservation = contourRequest.results?.first {
return contoursObservation.normalizedPath
}
return nil
}
}

How to remove blur effect from image in iOS Swift

I have an image on which I applied blur effect using the following code:
@IBAction func blurSliderSlides(_ sender: UISlider) {
let currentValue = Int(sender.value)
let currentFilter = CIFilter(name: "CIGaussianBlur")
currentFilter!.setValue(CIImage(image: mainImage), forKey: kCIInputImageKey)
currentFilter!.setValue(currentValue, forKey: kCIInputRadiusKey)
let cropFilter = CIFilter(name: "CICrop")
cropFilter!.setValue(currentFilter!.outputImage, forKey: kCIInputImageKey)
cropFilter!.setValue(CIVector(cgRect: (CIImage(image: mainImage)?.extent)!), forKey: "inputRectangle")
let output = cropFilter!.outputImage
let cgimg = context.createCGImage(output!, from: output!.extent)
processedImage = UIImage(cgImage: cgimg!)
backgroundImage.image = processedImage
}
Now I have three buttons: square, circle, and rectangle. Whenever the user taps one of the buttons, a view of that shape is created in the middle of the blurred image, and the user can make that view larger or smaller using a slider. What I want is that whenever a subview of any shape is added, the background image should be un-blurred in the area where that view sits.
subview created on background image:
@IBAction func squareButtonTapped(_ sender: Any) {
squareView.removeFromSuperview()
squareView.frame = CGRect(x: backgroundImage.bounds.midX, y: backgroundImage.bounds.midY, width: CGFloat(backgroundImage.frame.size.height * 20 / 100), height: CGFloat(backgroundImage.frame.size.width * 10 / 100))
backgroundImage.addSubview(squareView)
}
Using the slider, the subview can be resized:
@IBAction func blurTypeSliderSlides(_ sender: UISlider) {
squareView.transform = CGAffineTransform(scaleX: CGFloat(sender.value / 10), y: CGFloat(sender.value / 10))
}
How can I remove the blur effect from the background image at the point where the subview is added?
I have searched a lot but found nothing that fits my requirement. Can someone please help? Thanks in advance.
It looks like you have a UIImage and a UIImageView declared as a property of your controller... and in viewDidLoad() you're loading that image, something like:
mainImage = UIImage(named: "background")
I'm guessing that, because in your func blurSliderSlides(_ sender: UISlider) you have this line:
currentFilter!.setValue(CIImage(image: mainImage), forKey: kCIInputImageKey)
and at the end:
backgroundImage.image = processedImage
So, when you add the "shape" subview, set the .image to the original:
@IBAction func squareButtonTapped(_ sender: Any) {
squareView.removeFromSuperview()
squareView.frame = CGRect(x: backgroundImage.bounds.midX, y: backgroundImage.bounds.midY, width: CGFloat(backgroundImage.frame.size.height * 20 / 100), height: CGFloat(backgroundImage.frame.size.width * 10 / 100))
backgroundImage.addSubview(squareView)
// replace the "blur" image with the original
backgroundImage.image = mainImage
}
Edit - after clarification in comments...
You don't want to think in terms of "adding subviews."
Instead, use two image views... one containing the original image, and another containing the blurred image, overlaid on top. Then use a layer mask (with a "hole cut") on the blurred image view to let the original "show through."
So,
And it can look like this at run-time:
Here is some example code you can try out. It has one slider which controls the "cut-out oval" as a percentage of the image view width:
class BlurMaskVC: UIViewController {
var mainImage: UIImage!
var originalImageView: UIImageView!
var blurredImageView: UIImageView!
override func viewDidLoad() {
super.viewDidLoad()
// make sure we can load the image
guard let img = UIImage(named: "bkg640x360") else {
fatalError("Could not load image!!!")
}
mainImage = img
originalImageView = UIImageView()
blurredImageView = UIImageView()
originalImageView.image = mainImage
blurredImageView.image = mainImage
originalImageView.translatesAutoresizingMaskIntoConstraints = false
view.addSubview(originalImageView)
blurredImageView.translatesAutoresizingMaskIntoConstraints = false
view.addSubview(blurredImageView)
let g = view.safeAreaLayoutGuide
NSLayoutConstraint.activate([
// constrain original image view Top / Leading / Trailing
originalImageView.topAnchor.constraint(equalTo: g.topAnchor, constant: 0.0),
originalImageView.leadingAnchor.constraint(equalTo: g.leadingAnchor, constant: 0.0),
originalImageView.trailingAnchor.constraint(equalTo: g.trailingAnchor, constant: 0.0),
// let's use the image's aspect ratio
originalImageView.heightAnchor.constraint(equalTo: originalImageView.widthAnchor, multiplier: mainImage.size.height / mainImage.size.width),
// constrain blurred image view to match the original image view
// so it's overlaid directly on top
blurredImageView.topAnchor.constraint(equalTo: originalImageView.topAnchor, constant: 0.0),
blurredImageView.leadingAnchor.constraint(equalTo: originalImageView.leadingAnchor, constant: 0.0),
blurredImageView.trailingAnchor.constraint(equalTo: originalImageView.trailingAnchor, constant: 0.0),
blurredImageView.bottomAnchor.constraint(equalTo: originalImageView.bottomAnchor, constant: 0.0),
])
// a slider to set the "percentage" of the image view to "un-blur"
let areaSlider = UISlider()
areaSlider.translatesAutoresizingMaskIntoConstraints = false
view.addSubview(areaSlider)
NSLayoutConstraint.activate([
areaSlider.topAnchor.constraint(equalTo: originalImageView.bottomAnchor, constant: 20.0),
areaSlider.leadingAnchor.constraint(equalTo: g.leadingAnchor, constant: 20.0),
areaSlider.trailingAnchor.constraint(equalTo: g.trailingAnchor, constant: -20.0),
])
areaSlider.addTarget(self, action: #selector(updateBlurMask(_:)), for: .valueChanged)
// since this example does not have a "blur" slider,
// let's set the blur to 20
doBlur(20)
}
func doBlur(_ currentValue: Int) {
let context = CIContext(options: nil)
guard let inputImage = CIImage(image: mainImage) else {
fatalError("Could not get CIImage from mainImage!!!")
}
let originalOrientation = mainImage.imageOrientation
let originalScale = mainImage.scale
if let filter = CIFilter(name: "CIGaussianBlur") {
filter.setValue(inputImage, forKey: kCIInputImageKey)
filter.setValue(currentValue, forKey: kCIInputRadiusKey)
guard let outputImage = filter.outputImage,
let cgImage = context.createCGImage(outputImage, from: inputImage.extent)
else {
fatalError("Could not generate Processed Image!!!")
}
let processedImage = UIImage(cgImage: cgImage, scale: originalScale, orientation: originalOrientation)
blurredImageView.image = processedImage
}
}
@objc func updateBlurMask(_ sender: UISlider) {
let b: CGRect = blurredImageView.bounds
// let's make a "square" rect with the max of blurImageView's width, height
let m: CGFloat = max(b.width, b.height)
let maxR: CGRect = CGRect(x: 0.0, y: 0.0, width: m, height: m)
// use the value of the slider - 0.0. to 1.0
// as a percentage of the width
// to scale the max rect
let v: CGFloat = CGFloat(sender.value)
let tr = CGAffineTransform(scaleX: v, y: v)
var r: CGRect = maxR.applying(tr)
// center it
r.origin.x = (b.width - r.width) * 0.5
r.origin.y = (b.height - r.height) * 0.5
// a path for the full image view rect
let fullPath = UIBezierPath(rect: blurredImageView.bounds)
// a path for the oval in a percentage of the full rect
let pth = UIBezierPath(ovalIn: r)
// append the paths
fullPath.append(pth)
// this "cuts a hole" in the path
fullPath.usesEvenOddFillRule = true
// shape layer to use as a mask
let maskLayer = CAShapeLayer()
// again, we're going to "cut a hole" in the path
maskLayer.fillRule = .evenOdd
// set the path
maskLayer.path = fullPath.cgPath
// can be any opaque color
maskLayer.fillColor = UIColor.white.cgColor
// set the layer mask
blurredImageView.layer.mask = maskLayer
}
}
You should have no trouble using the updateBlurMask() func as a basis for your other shapes.
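For example, to cut a rectangular hole instead of an oval, only the appended path changes; a minimal sketch, reusing the names from updateBlurMask() above:
// same as updateBlurMask(), but appending a rect instead of an oval
let fullPath = UIBezierPath(rect: blurredImageView.bounds)
fullPath.append(UIBezierPath(rect: r))
fullPath.usesEvenOddFillRule = true
let maskLayer = CAShapeLayer()
maskLayer.fillRule = .evenOdd
maskLayer.path = fullPath.cgPath
maskLayer.fillColor = UIColor.white.cgColor
blurredImageView.layer.mask = maskLayer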

UIImageView added as subview in a UIView with clipsToBounds is not working

I have a UIView with a UIImageView added as a subview; the UIImageView shows a texture that repeats. The UIView's width and height are correct, but the image extends beyond that size. I added clipsToBounds, but it's not clipping the image at all. Is there a specific order, or what am I doing wrong that the image is not clipped inside its parent view?
let rectangleView = UIView(frame: CGRect(x: x, y: y, width: width, height: height))
rectangleView.isUserInteractionEnabled = false
if let texturesUrl = layout.Url, let url = texturesUrl.isValidURL() ? URL(string: texturesUrl) : URL(string: String(format: AppManager.shared.baseTexturesUrl, texturesUrl)) {
let widthLimit = scale * CGFloat(layout.Width ?? 0)
let heightLimit = scale * CGFloat(layout.Height ?? 0)
let widthStep = scale * CGFloat(layout.TileWidth ?? layout.Width ?? 0)
let heightStep = scale * CGFloat(layout.TileHeight ?? layout.Height ?? 0)
var locY = CGFloat(0)
let size = CGSize(width: widthStep, height: heightStep)
if widthLimit > 0, heightLimit > 0 {
while locY < heightLimit {
var locX = CGFloat(0)
while locX < widthLimit {
let imageView = UIImageView()
rectangleView.addSubview(imageView)
imageView.contentMode = .scaleAspectFill
imageView.translatesAutoresizingMaskIntoConstraints = false
imageView.clipsToBounds = true
imageView.isUserInteractionEnabled = false
imageView.anchor(top: rectangleView.topAnchor, leading: rectangleView.leadingAnchor, bottom: nil, trailing: nil, padding: UIEdgeInsets(top: locY, left: locX, bottom: 0, right: 0), size: size)
imageView.setImage(with: url, size: size)
locX += widthStep
}
locY += heightStep
}
}
}
You don't need to add so many image views; just use the image as a repeating background:
rectangleView.backgroundColor = UIColor(patternImage: myImage)
See documentation for UIColor(patternImage:).
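Note that UIColor(patternImage:) tiles the image at its native point size. If you need a specific tile size (like the widthStep / heightStep values computed above), you can redraw the image at that size first. A minimal sketch, assuming myImage and the step values from the question's code:
// redraw the tile at the desired size, then use it as a pattern color
let tileSize = CGSize(width: widthStep, height: heightStep)
let tile = UIGraphicsImageRenderer(size: tileSize).image { _ in
    myImage.draw(in: CGRect(origin: .zero, size: tileSize))
}
rectangleView.backgroundColor = UIColor(patternImage: tile)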
You can do this much more efficiently with CAReplicatorLayer.
Here's a quick example:
class TileExampleViewController: UIViewController {
let tiledView = UIView()
override func viewDidLoad() {
super.viewDidLoad()
tiledView.translatesAutoresizingMaskIntoConstraints = false
view.addSubview(tiledView)
let g = view.safeAreaLayoutGuide
NSLayoutConstraint.activate([
tiledView.topAnchor.constraint(equalTo: g.topAnchor, constant: 20.0),
tiledView.leadingAnchor.constraint(equalTo: g.leadingAnchor, constant: 20.0),
tiledView.trailingAnchor.constraint(equalTo: g.trailingAnchor, constant: -20.0),
tiledView.bottomAnchor.constraint(equalTo: g.bottomAnchor, constant: -20.0),
])
}
override func viewDidLayoutSubviews() {
// we want to do this here, when we know the
// size / frame of the tiledView
// make sure we can load the image
guard let tileImage = UIImage(named: "tileSquare") else { return }
// let's just pick 80 x 80 for the tile size
let tileSize: CGSize = CGSize(width: 80.0, height: 80.0)
// create a "horizontal" replicator layer
let hReplicatorLayer = CAReplicatorLayer()
hReplicatorLayer.frame.size = tiledView.frame.size
hReplicatorLayer.masksToBounds = true
// create a "vertical" replicator layer
let vReplicatorLayer = CAReplicatorLayer()
vReplicatorLayer.frame.size = tiledView.frame.size
vReplicatorLayer.masksToBounds = true
// create a layer to hold the image
let imageLayer = CALayer()
imageLayer.contents = tileImage.cgImage
imageLayer.frame.size = tileSize
// add the imageLayer to the horizontal replicator layer
hReplicatorLayer.addSublayer(imageLayer)
// add the horizontal replicator layer to the vertical replicator layer
vReplicatorLayer.addSublayer(hReplicatorLayer)
// how many "tiles" do we need to fill the width
let hCount = tiledView.frame.width / tileSize.width
hReplicatorLayer.instanceCount = Int(ceil(hCount))
// Shift each image instance right by tileSize width
hReplicatorLayer.instanceTransform = CATransform3DMakeTranslation(
tileSize.width, 0, 0
)
// how many "rows" do we need to fill the height
let vCount = tiledView.frame.height / tileSize.height
vReplicatorLayer.instanceCount = Int(ceil(vCount))
// shift each "row" down by tileSize height
vReplicatorLayer.instanceTransform = CATransform3DMakeTranslation(
0, tileSize.height, 0
)
// add the vertical replicator layer as a sublayer
tiledView.layer.addSublayer(vReplicatorLayer)
}
}
I used this tile image:
and we get this result with let tileSize: CGSize = CGSize(width: 80.0, height: 80.0):
with let tileSize: CGSize = CGSize(width: 120.0, height: 160.0):
with let tileSize: CGSize = CGSize(width: 40.0, height: 40.0):

How to take high-quality screenshot with UIGraphicsImageRenderer programmatically?

PROBLEM: After I take a screenshot, the image is blurry when checked by zooming. The text inside the image appears blurred when zoomed.
I know this question has been raised many times, but none of the posts have the desired solution. I already checked quite a few, like this one.
All the solutions shared so far on this forum are repeats of one another, and none of them solves the problem.
Here is what I am doing:
extension UIView {
func asImage() -> UIImage? {
let format = UIGraphicsImageRendererFormat()
format.opaque = self.isOpaque
let renderer = UIGraphicsImageRenderer(bounds: bounds,format: format)
return renderer.image(actions: { rendererContext in
layer.render(in: rendererContext.cgContext)
})
}
//The other option, using UIGraphicsBeginImageContextWithOptions / UIGraphicsEndImageContext
//(renamed asImageLegacy so both versions can compile in the same extension)
func asImageLegacy() -> UIImage? {
UIGraphicsBeginImageContextWithOptions(self.bounds.size, self.isOpaque, 0.0)
defer { UIGraphicsEndImageContext() }
if let context = UIGraphicsGetCurrentContext() {
self.layer.render(in: context)
return UIGraphicsGetImageFromCurrentImageContext()
}
return nil
}
}
The above functions convert a UIView into an image, but the returned image quality is not up to the mark.
You won't get your desired results by doing a UIView "image capture."
When you zoom a UIScrollView it does not perform a vector scaling... it performs a rasterized scaling.
You can easily confirm this by using a UILabel as the viewForZooming. Here is a label with 30-point system font...
at 1x zoom:
at 10x zoom:
Code for that example:
class ViewController: UIViewController, UIScrollViewDelegate {
let zoomLabel: UILabel = UILabel()
let scrollView: UIScrollView = UIScrollView()
override func viewDidLoad() {
super.viewDidLoad()
[zoomLabel, scrollView].forEach {
$0.translatesAutoresizingMaskIntoConstraints = false
}
scrollView.addSubview(zoomLabel)
view.addSubview(scrollView)
let g = view.safeAreaLayoutGuide
NSLayoutConstraint.activate([
scrollView.centerXAnchor.constraint(equalTo: g.centerXAnchor),
scrollView.centerYAnchor.constraint(equalTo: g.centerYAnchor),
scrollView.widthAnchor.constraint(equalToConstant: 300.0),
scrollView.heightAnchor.constraint(equalToConstant: 200.0),
zoomLabel.topAnchor.constraint(equalTo: scrollView.contentLayoutGuide.topAnchor),
zoomLabel.leadingAnchor.constraint(equalTo: scrollView.contentLayoutGuide.leadingAnchor),
zoomLabel.trailingAnchor.constraint(equalTo: scrollView.contentLayoutGuide.trailingAnchor),
zoomLabel.bottomAnchor.constraint(equalTo: scrollView.contentLayoutGuide.bottomAnchor),
])
zoomLabel.textColor = .red
zoomLabel.backgroundColor = .yellow
zoomLabel.font = UIFont.systemFont(ofSize: 30.0, weight: .regular)
zoomLabel.text = "Sample Text"
scrollView.delegate = self
scrollView.minimumZoomScale = 1
scrollView.maximumZoomScale = 10
view.backgroundColor = UIColor(white: 0.9, alpha: 1.0)
scrollView.backgroundColor = .white
}
func viewForZooming(in scrollView: UIScrollView) -> UIView? {
return zoomLabel
}
}
When you "capture the view content" as a UIImage, you get a bitmap that is the size of the view in points x the screen scale.
So, on an iPhone 8, for example, with @2x screen scale, a 300 x 200 view will be "captured" as a UIImage with 600 x 400 pixels.
Whether you zoom the view itself, or a bitmap-capture of the view, you'll get the same result -- blurry edges when zoomed.
Your comments include: "... while editing image ..." -- this is a common issue, where we want to allow the user to add text (labels), Bezier Path shapes, additional images, etc. What the user sees on the screen, for example, may be an original image of 3000 x 2000 pixels, displayed at 300 x 200 points. Adding a 30-point label might look good on the screen, but then grabbing that as a UIImage (either for zooming or for saving to disk) ends up as a 600 x 400 pixel image which, of course, will not look good at a larger size.
Frequently, the approach to resolve this is along these lines:
Allow the user to edit at screen dimensions, e.g.
show a 3000 x 2000 pixel image scaled down in a 300 x 200 view
add a Bezier Path, oval-in-rect (20, 20, 200, 200)
add a 30-point label at origin (32, 32)
Then, when "capturing" that for output / zooming
take the original 3000 x 2000 pixel image
add a Bezier Path, oval-in-rect (20 * 10, 20 * 10, 200 * 10, 200 * 10)
add a (30 * 10)-point label at origin (32 * 10, 32 * 10)
Another option is to do the on-screen editing scaled-down.
So, you might use a 300 x 200 image view, with your 3000 x 2000 pixel image (scaled to fit). When the user says "I want to add an oval Bezier Path in rect (20, 20, 200, 200)", your code would draw that oval at rect (20 * 10, 20 * 10, 200 * 10, 200 * 10) on the image itself and then refresh the .image property of the image view.
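Here is a minimal sketch of that second option: drawing the oval and the label directly onto the full-size image with UIGraphicsImageRenderer. The function and its parameters are illustrative, not from the question; the scale factor of 10 is the 3000 / 300 ratio from the example above:
func renderFullSize(baseImage: UIImage,
                    ovalRect: CGRect,
                    text: String,
                    textOrigin: CGPoint,
                    fontSize: CGFloat,
                    scale: CGFloat) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: baseImage.size)
    return renderer.image { _ in
        baseImage.draw(at: .zero)
        // scale the on-screen oval rect up to image pixels
        let scaledOval = ovalRect.applying(CGAffineTransform(scaleX: scale, y: scale))
        UIColor.red.setStroke()
        let path = UIBezierPath(ovalIn: scaledOval)
        path.lineWidth = 2.0 * scale
        path.stroke()
        // draw the label text at the scaled-up point size and origin
        let attrs: [NSAttributedString.Key: Any] = [
            .font: UIFont.systemFont(ofSize: fontSize * scale),
            .foregroundColor: UIColor.red
        ]
        (text as NSString).draw(at: CGPoint(x: textOrigin.x * scale,
                                            y: textOrigin.y * scale),
                                withAttributes: attrs)
    }
}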
Here's a little more detailed example to help make things clear:
class ViewController: UIViewController, UIScrollViewDelegate {
let topView: UIView = UIView()
let topLabel: UILabel = UILabel()
let botView: UIView = UIView()
let botLabel: UILabel = UILabel()
let topScrollView: UIScrollView = UIScrollView()
let botScrollView: UIScrollView = UIScrollView()
let topStatLabel: UILabel = UILabel()
let botStatLabel: UILabel = UILabel()
override func viewDidLoad() {
super.viewDidLoad()
[topView, topLabel, botView, botLabel, topScrollView, botScrollView, topStatLabel, botStatLabel].forEach {
$0.translatesAutoresizingMaskIntoConstraints = false
}
topView.addSubview(topLabel)
botView.addSubview(botLabel)
topScrollView.addSubview(topView)
botScrollView.addSubview(botView)
view.addSubview(topStatLabel)
view.addSubview(botStatLabel)
view.addSubview(topScrollView)
view.addSubview(botScrollView)
let g = view.safeAreaLayoutGuide
NSLayoutConstraint.activate([
topStatLabel.topAnchor.constraint(equalTo: g.topAnchor, constant: 20.0),
topStatLabel.leadingAnchor.constraint(equalTo: topScrollView.leadingAnchor),
topScrollView.topAnchor.constraint(equalTo: topStatLabel.bottomAnchor, constant: 4.0),
topScrollView.centerXAnchor.constraint(equalTo: g.centerXAnchor),
topScrollView.widthAnchor.constraint(equalToConstant: 300.0),
topScrollView.heightAnchor.constraint(equalToConstant: 200.0),
botScrollView.topAnchor.constraint(equalTo: topScrollView.bottomAnchor, constant: 12.0),
botScrollView.centerXAnchor.constraint(equalTo: g.centerXAnchor),
botScrollView.widthAnchor.constraint(equalToConstant: 300.0),
botScrollView.heightAnchor.constraint(equalToConstant: 200.0),
botStatLabel.topAnchor.constraint(equalTo: botScrollView.bottomAnchor, constant: 4.0),
botStatLabel.leadingAnchor.constraint(equalTo: botScrollView.leadingAnchor),
topView.widthAnchor.constraint(equalToConstant: 300.0),
topView.heightAnchor.constraint(equalToConstant: 200.0),
botView.widthAnchor.constraint(equalToConstant: 300.0 * 10.0),
botView.heightAnchor.constraint(equalToConstant: 200.0 * 10.0),
topLabel.topAnchor.constraint(equalTo: topView.topAnchor, constant: 8.0),
topLabel.leadingAnchor.constraint(equalTo: topView.leadingAnchor, constant: 8.0),
botLabel.topAnchor.constraint(equalTo: botView.topAnchor, constant: 8.0 * 10.0),
botLabel.leadingAnchor.constraint(equalTo: botView.leadingAnchor, constant: 8.0 * 10.0),
topView.topAnchor.constraint(equalTo: topScrollView.contentLayoutGuide.topAnchor),
topView.leadingAnchor.constraint(equalTo: topScrollView.contentLayoutGuide.leadingAnchor),
topView.trailingAnchor.constraint(equalTo: topScrollView.contentLayoutGuide.trailingAnchor),
topView.bottomAnchor.constraint(equalTo: topScrollView.contentLayoutGuide.bottomAnchor),
botView.topAnchor.constraint(equalTo: botScrollView.contentLayoutGuide.topAnchor),
botView.leadingAnchor.constraint(equalTo: botScrollView.contentLayoutGuide.leadingAnchor),
botView.trailingAnchor.constraint(equalTo: botScrollView.contentLayoutGuide.trailingAnchor),
botView.bottomAnchor.constraint(equalTo: botScrollView.contentLayoutGuide.bottomAnchor),
])
topLabel.textColor = .red
topLabel.backgroundColor = .yellow
topLabel.font = UIFont.systemFont(ofSize: 30.0, weight: .regular)
topLabel.text = "Sample Text"
botLabel.textColor = .red
botLabel.backgroundColor = .yellow
botLabel.font = UIFont.systemFont(ofSize: 30.0 * 10.0, weight: .regular)
botLabel.text = "Sample Text"
topScrollView.delegate = self
topScrollView.minimumZoomScale = 1
topScrollView.maximumZoomScale = 10
botScrollView.delegate = self
botScrollView.minimumZoomScale = 0.1
botScrollView.maximumZoomScale = 1
topScrollView.zoomScale = topScrollView.minimumZoomScale
botScrollView.zoomScale = botScrollView.minimumZoomScale
view.backgroundColor = UIColor(white: 0.9, alpha: 1.0)
topScrollView.backgroundColor = .white
botScrollView.backgroundColor = .white
topStatLabel.font = UIFont.systemFont(ofSize: 14, weight: .light)
topStatLabel.numberOfLines = 0
botStatLabel.font = UIFont.systemFont(ofSize: 14, weight: .light)
botStatLabel.numberOfLines = 0
let t = UITapGestureRecognizer(target: self, action: #selector(self.tapped(_:)))
view.addGestureRecognizer(t)
}
override func viewDidAppear(_ animated: Bool) {
super.viewDidAppear(animated)
updateStatLabels()
}
func updateStatLabels() -> Void {
var sTop = ""
sTop += "Label Point Size: \(topLabel.font.pointSize)"
sTop += "\n"
sTop += "Label Frame: \(topLabel.frame)"
sTop += "\n"
sTop += "View Size: \(topView.bounds.size)"
sTop += "\n"
sTop += "Zoom Scale: \(String(format: "%0.1f", topScrollView.zoomScale))"
var sBot = ""
sBot += "Zoom Scale: \(String(format: "%0.1f", botScrollView.zoomScale))"
sBot += "\n"
sBot += "View Size: \(botView.bounds.size)"
sBot += "\n"
sBot += "Label Frame: \(botLabel.frame)"
sBot += "\n"
sBot += "Label Point Size: \(botLabel.font.pointSize)"
topStatLabel.text = sTop
botStatLabel.text = sBot
}
func viewForZooming(in scrollView: UIScrollView) -> UIView? {
if scrollView == topScrollView {
return topView
}
return botView
}
@objc func tapped(_ g: UITapGestureRecognizer) -> Void {
if Int(topScrollView.zoomScale) == Int(topScrollView.maximumZoomScale) {
topScrollView.zoomScale = topScrollView.minimumZoomScale
} else {
topScrollView.zoomScale += 1
}
topScrollView.contentOffset = .zero
// comparing floating point directly will fail, so round the values
if round(botScrollView.zoomScale * 10) == round(botScrollView.maximumZoomScale * 10) {
botScrollView.zoomScale = botScrollView.minimumZoomScale
} else {
botScrollView.zoomScale += 0.1
}
botScrollView.contentOffset = .zero
updateStatLabels()
}
}
The top scroll view has a 300 x 200 view with a 30-point label, allowing zoomScale from 1 to 10.
The bottom scroll view has a 3000 x 2000 view with a 300-point label, allowing zoomScale from 0.1 to 1.0.
Each time you tap the screen, the scrollViews increase zoomScale by 1 and 0.1 respectively.
And it looks like this at min-scale:
at 5 and 0.5 scale:
and at 10 and 1.0 scale:
I am using this code in one of my apps and it seems to work fine. I don't know if its quality is enough for you.
import UIKit
extension UIApplication {
var screenShot: UIImage? {
if let layer = keyWindow?.layer { // note: UIApplication.keyWindow is deprecated as of iOS 13
let scale = UIScreen.main.scale
UIGraphicsBeginImageContextWithOptions(layer.frame.size, false, scale)
if let context = UIGraphicsGetCurrentContext() {
layer.render(in: context)
let screenshot = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return screenshot
}
}
return nil
}
}
I use this extension to create an image from the view. UIGraphicsGetCurrentContext() returns a reference to the current graphics context; it will not create one. It is important to remember this, because seen this way you can tell why it needs no size parameter: the current context already has the size it was given when the graphics context was created.
extension UIView {
func toImage() -> UIImage? {
UIGraphicsBeginImageContextWithOptions(bounds.size, false, UIScreen.main.scale)
drawHierarchy(in: self.bounds, afterScreenUpdates: true)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return image
}
}
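If the goal is simply more pixels in the capture, you can also render at an explicit scale instead of UIScreen.main.scale. Keep in mind this only multiplies pixels; as explained in the answer above, it cannot add detail that was never in the rasterized view. A minimal sketch:
extension UIView {
    // render the view at an arbitrary scale (e.g. 4.0 for a 4x-pixel capture)
    func asImage(scale: CGFloat) -> UIImage {
        let format = UIGraphicsImageRendererFormat()
        format.scale = scale
        let renderer = UIGraphicsImageRenderer(bounds: bounds, format: format)
        return renderer.image { ctx in
            layer.render(in: ctx.cgContext)
        }
    }
}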

Change White Pixels in a UIImage

I have a UIImage which is an outline, and one which is filled; both were created with OpenCV, one from grab cut and the other from a structuring element.
My two images are like this:
I am trying to change all of the white pixels in the outlined image, because I want to merge the two images together so I end up with a red outline and a white filled area in the middle. I am using the code below to merge the two. I am aware that instead of red it will be a pinkish colour, and grey instead of white, since I am just blending them together with alpha.
// Start an image context
UIGraphicsBeginImageContext(grabcutResult.size)
// Create rect for this draw session
let rect = CGRect(x: 0.0, y: 0.0, width: grabcutResult.size.width, height: grabcutResult.size.height)
grabcutResult.draw(in: rect)
redGrabcutOutline.draw(in: rect, blendMode: .normal, alpha: 0.5)
let finalImage = UIGraphicsGetImageFromCurrentImageContext()
The idea is that it should look something like this:
I want to be able to complete this operation quickly, but the only solutions I have found are either for UIImageView (which only affects how things are rendered, not the underlying UIImage) or they involve looping over the entire image pixel by pixel.
I am trying to find a solution that will mask all the white pixels in the outline to red without having to loop over the entire image pixel by pixel, as that is too slow.
Ideally it would be good if I could get OpenCV to just return a red outline instead of white, but I don't think it's possible to change this (maybe I'm wrong).
Using Swift, btw... any help is appreciated, thanks in advance.
This may work for you - using only Swift code...
extension UIImage {
func maskWithColor(color: UIColor) -> UIImage? {
let maskingColors: [CGFloat] = [1, 255, 1, 255, 1, 255]
let bounds = CGRect(origin: .zero, size: size)
let maskImage = cgImage!
var returnImage: UIImage?
// make sure image has no alpha channel
let rFormat = UIGraphicsImageRendererFormat()
rFormat.opaque = true
let renderer = UIGraphicsImageRenderer(size: size, format: rFormat)
let noAlphaImage = renderer.image {
(context) in
self.draw(at: .zero)
}
let noAlphaCGRef = noAlphaImage.cgImage
if let imgRefCopy = noAlphaCGRef?.copy(maskingColorComponents: maskingColors) {
let rFormat = UIGraphicsImageRendererFormat()
rFormat.opaque = false
let renderer = UIGraphicsImageRenderer(size: size, format: rFormat)
returnImage = renderer.image {
(context) in
context.cgContext.clip(to: bounds, mask: maskImage)
context.cgContext.setFillColor(color.cgColor)
context.cgContext.fill(bounds)
context.cgContext.draw(imgRefCopy, in: bounds)
}
}
return returnImage
}
}
This extension returns a UIImage with white replaced with the passed UIColor, and the black "background" changed to transparent.
Use it in this manner:
// change filled white star to gray with transparent background
let modFilledImage = filledImage.maskWithColor(color: UIColor(red: 200.0/255.0, green: 200.0/255.0, blue: 200.0/255.0, alpha: 1.0))
// change outlined white star to red with transparent background
let modOutlineImage = outlineImage.maskWithColor(color: UIColor.red)
// combine the images on a black background
Here is a full example, using your two original images (most of the code is setting up image views to show the results):
extension UIImage {
func maskWithColor(color: UIColor) -> UIImage? {
let maskingColors: [CGFloat] = [1, 255, 1, 255, 1, 255]
let bounds = CGRect(origin: .zero, size: size)
let maskImage = cgImage!
var returnImage: UIImage?
// make sure image has no alpha channel
let rFormat = UIGraphicsImageRendererFormat()
rFormat.opaque = true
let renderer = UIGraphicsImageRenderer(size: size, format: rFormat)
let noAlphaImage = renderer.image {
(context) in
self.draw(at: .zero)
}
let noAlphaCGRef = noAlphaImage.cgImage
if let imgRefCopy = noAlphaCGRef?.copy(maskingColorComponents: maskingColors) {
let rFormat = UIGraphicsImageRendererFormat()
rFormat.opaque = false
let renderer = UIGraphicsImageRenderer(size: size, format: rFormat)
returnImage = renderer.image {
(context) in
context.cgContext.clip(to: bounds, mask: maskImage)
context.cgContext.setFillColor(color.cgColor)
context.cgContext.fill(bounds)
context.cgContext.draw(imgRefCopy, in: bounds)
}
}
return returnImage
}
}
class MaskWorkViewController: UIViewController {
let origFilledImgView: UIImageView = {
let v = UIImageView()
v.translatesAutoresizingMaskIntoConstraints = false
v.contentMode = .center
return v
}()
let origOutlineImgView: UIImageView = {
let v = UIImageView()
v.translatesAutoresizingMaskIntoConstraints = false
v.contentMode = .center
return v
}()
let modifiedFilledImgView: UIImageView = {
let v = UIImageView()
v.translatesAutoresizingMaskIntoConstraints = false
v.contentMode = .center
return v
}()
let modifiedOutlineImgView: UIImageView = {
let v = UIImageView()
v.translatesAutoresizingMaskIntoConstraints = false
v.contentMode = .center
return v
}()
let combinedImgView: UIImageView = {
let v = UIImageView()
v.translatesAutoresizingMaskIntoConstraints = false
v.contentMode = .center
return v
}()
let origStack: UIStackView = {
let v = UIStackView()
v.translatesAutoresizingMaskIntoConstraints = false
v.axis = .horizontal
v.spacing = 20
return v
}()
let modifiedStack: UIStackView = {
let v = UIStackView()
v.translatesAutoresizingMaskIntoConstraints = false
v.axis = .horizontal
v.spacing = 20
return v
}()
let mainStack: UIStackView = {
let v = UIStackView()
v.translatesAutoresizingMaskIntoConstraints = false
v.axis = .vertical
v.alignment = .center
v.spacing = 10
return v
}()
override func viewDidLoad() {
super.viewDidLoad()
guard let filledImage = UIImage(named: "StarFill"),
let outlineImage = UIImage(named: "StarEdge") else {
return
}
var modifiedFilledImage: UIImage = UIImage()
var modifiedOutlineImage: UIImage = UIImage()
var combinedImage: UIImage = UIImage()
// for both original images, replace white with color
// and make black transparent
if let modFilledImage = filledImage.maskWithColor(color: UIColor(red: 200.0/255.0, green: 200.0/255.0, blue: 200.0/255.0, alpha: 1.0)),
let modOutlineImage = outlineImage.maskWithColor(color: UIColor.red) {
modifiedFilledImage = modFilledImage
modifiedOutlineImage = modOutlineImage
let rFormat = UIGraphicsImageRendererFormat()
rFormat.opaque = true
let renderer = UIGraphicsImageRenderer(size: modifiedFilledImage.size, format: rFormat)
// combine modified images on black background
combinedImage = renderer.image {
(context) in
context.cgContext.setFillColor(UIColor.black.cgColor)
context.cgContext.fill(CGRect(origin: .zero, size: modifiedFilledImage.size))
modifiedFilledImage.draw(at: .zero)
modifiedOutlineImage.draw(at: .zero)
}
}
// setup image views and set .image properties
setupUI(filledImage.size)
origFilledImgView.image = filledImage
origOutlineImgView.image = outlineImage
modifiedFilledImgView.image = modifiedFilledImage
modifiedOutlineImgView.image = modifiedOutlineImage
combinedImgView.image = combinedImage
}
func setupUI(_ imageSize: CGSize) -> Void {
origStack.addArrangedSubview(origFilledImgView)
origStack.addArrangedSubview(origOutlineImgView)
modifiedStack.addArrangedSubview(modifiedFilledImgView)
modifiedStack.addArrangedSubview(modifiedOutlineImgView)
var lbl = UILabel()
lbl.textAlignment = .center
lbl.text = "Original Images"
mainStack.addArrangedSubview(lbl)
mainStack.addArrangedSubview(origStack)
lbl = UILabel()
lbl.textAlignment = .center
lbl.numberOfLines = 0
lbl.text = "Modified Images\n(UIImageViews have Green Background)"
mainStack.addArrangedSubview(lbl)
mainStack.addArrangedSubview(modifiedStack)
lbl = UILabel()
lbl.textAlignment = .center
lbl.text = "Combined on Black Background"
mainStack.addArrangedSubview(lbl)
mainStack.addArrangedSubview(combinedImgView)
view.addSubview(mainStack)
NSLayoutConstraint.activate([
mainStack.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor, constant: 20.0),
mainStack.centerXAnchor.constraint(equalTo: view.centerXAnchor, constant: 0.0),
])
[origFilledImgView, origOutlineImgView, modifiedFilledImgView, modifiedOutlineImgView, combinedImgView].forEach {
$0.backgroundColor = .green
NSLayoutConstraint.activate([
$0.widthAnchor.constraint(equalToConstant: imageSize.width),
$0.heightAnchor.constraint(equalToConstant: imageSize.height),
])
}
}
}
And the result, showing the original, modified and final combined image... Image views have green backgrounds to show the transparent areas:
The idea is to bitwise-or the two masks together, which "merges" them. Since this new combined grayscale image is still a single-channel image, we need to convert it to a 3-channel image so we can apply color. Finally, we color the outline mask red to get our result.
I implemented it in Python with OpenCV, but you can adapt the same idea in Swift:
import cv2
# Read in images as grayscale
full = cv2.imread('1.png', 0)
outline = cv2.imread('2.png', 0)
# Bitwise-or masks
combine = cv2.bitwise_or(full, outline)
# Combine to 3 color channel and color outline red
combine = cv2.merge([combine, combine, combine])
combine[outline > 120] = (57,0,204)
cv2.imshow('combine', combine)
cv2.waitKey()
Benchmarks using IPython
In [3]: %timeit combine()
782 µs ± 10.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Using NumPy's vectorized indexing, it seems to be pretty fast.
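For reference, a rough Swift equivalent of the bitwise-or step: on black-and-white masks, a per-pixel maximum is the same as OR, and Core Image's CIMaximumCompositing filter does that on the GPU. This is a sketch of just the merge; the red coloring could then reuse the maskWithColor approach from the answer above:
import UIKit
import CoreImage

func orMasks(_ full: UIImage, _ outline: UIImage) -> UIImage? {
    guard let fullCI = CIImage(image: full),
          let outlineCI = CIImage(image: outline),
          let filter = CIFilter(name: "CIMaximumCompositing") else { return nil }
    // per-pixel max of the two grayscale masks == bitwise OR
    filter.setValue(outlineCI, forKey: kCIInputImageKey)
    filter.setValue(fullCI, forKey: kCIInputBackgroundImageKey)
    guard let output = filter.outputImage else { return nil }
    let context = CIContext()
    guard let cg = context.createCGImage(output, from: fullCI.extent) else { return nil }
    return UIImage(cgImage: cg)
}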
Try this (it's an instance method on UIImage, so put it in a UIImage extension):
extension UIImage {
public func maskWithColor2(color: UIColor) -> UIImage {
UIGraphicsBeginImageContextWithOptions(self.size, false, UIScreen.main.scale)
let context = UIGraphicsGetCurrentContext()!
color.setFill()
context.translateBy(x: 0, y: self.size.height)
context.scaleBy(x: 1.0, y: -1.0)
let rect = CGRect(x: 0.0, y: 0.0, width: self.size.width, height: self.size.height)
context.draw(self.cgImage!, in: rect)
context.setBlendMode(CGBlendMode.sourceIn)
context.addRect(rect)
context.drawPath(using: CGPathDrawingMode.fill)
let coloredImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return coloredImage!
}
}
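Use it like the earlier extension, for example:
let redOutline = outlineImage.maskWithColor2(color: .red)
Note that this version uses the .sourceIn blend mode, so it re-colors every non-transparent pixel. If your outline image has an opaque black background rather than a transparent one, the entire rect will be filled with the color.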
