PROBLEM: After I take a screenshot, the image is blurry when I zoom in; the text inside the image appears blurred.
I know this question has been raised many times, but none of the answers offers the desired solution. I have already checked quite a few posts, like this one.
All the solutions shared on this forum so far are repeats of one another, and none of them solves the problem.
Here is what I am doing:
extension UIView {
    func asImage() -> UIImage? {
        let format = UIGraphicsImageRendererFormat()
        format.opaque = self.isOpaque
        let renderer = UIGraphicsImageRenderer(bounds: bounds, format: format)
        return renderer.image { rendererContext in
            layer.render(in: rendererContext.cgContext)
        }
    }

    // The other option, using UIGraphicsBeginImageContextWithOptions / UIGraphicsEndImageContext
    // (renamed, since two methods with the same signature cannot coexist in one extension)
    func asImageLegacy() -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(self.bounds.size, self.isOpaque, 0.0)
        defer { UIGraphicsEndImageContext() }
        if let context = UIGraphicsGetCurrentContext() {
            self.layer.render(in: context)
            return UIGraphicsGetImageFromCurrentImageContext()
        }
        return nil
    }
}
The above functions convert a UIView into an image, but the quality of the returned image is not up to the mark.
You won't get your desired results by doing a UIView "image capture."
When you zoom a UIScrollView, it does not perform vector scaling... it performs rasterized scaling.
You can easily confirm this by using a UILabel as the viewForZooming. Here is a label with 30-point system font...
at 1x zoom:
at 10x zoom:
Code for that example:
class ViewController: UIViewController, UIScrollViewDelegate {
    let zoomLabel: UILabel = UILabel()
    let scrollView: UIScrollView = UIScrollView()

    override func viewDidLoad() {
        super.viewDidLoad()
        [zoomLabel, scrollView].forEach {
            $0.translatesAutoresizingMaskIntoConstraints = false
        }
        scrollView.addSubview(zoomLabel)
        view.addSubview(scrollView)
        let g = view.safeAreaLayoutGuide
        NSLayoutConstraint.activate([
            scrollView.centerXAnchor.constraint(equalTo: g.centerXAnchor),
            scrollView.centerYAnchor.constraint(equalTo: g.centerYAnchor),
            scrollView.widthAnchor.constraint(equalToConstant: 300.0),
            scrollView.heightAnchor.constraint(equalToConstant: 200.0),
            zoomLabel.topAnchor.constraint(equalTo: scrollView.contentLayoutGuide.topAnchor),
            zoomLabel.leadingAnchor.constraint(equalTo: scrollView.contentLayoutGuide.leadingAnchor),
            zoomLabel.trailingAnchor.constraint(equalTo: scrollView.contentLayoutGuide.trailingAnchor),
            zoomLabel.bottomAnchor.constraint(equalTo: scrollView.contentLayoutGuide.bottomAnchor),
        ])
        zoomLabel.textColor = .red
        zoomLabel.backgroundColor = .yellow
        zoomLabel.font = UIFont.systemFont(ofSize: 30.0, weight: .regular)
        zoomLabel.text = "Sample Text"
        scrollView.delegate = self
        scrollView.minimumZoomScale = 1
        scrollView.maximumZoomScale = 10
        view.backgroundColor = UIColor(white: 0.9, alpha: 1.0)
        scrollView.backgroundColor = .white
    }

    func viewForZooming(in scrollView: UIScrollView) -> UIView? {
        return zoomLabel
    }
}
When you "capture the view content" as a UIImage, you get a bitmap that is the size of the view in points times the screen scale.
So, on an iPhone 8, for example, with its @2x screen scale, a 300 x 200 view will be "captured" as a UIImage with 600 x 400 pixels.
Whether you zoom the view itself, or a bitmap-capture of the view, you'll get the same result -- blurry edges when zoomed.
Your comments include: "... while editing image ..." -- this is a common issue, where we want to allow the user to add text (labels), Bezier path shapes, additional images, etc. What the user sees on the screen, for example, may be an original image of 3000 x 2000 pixels, displayed at 300 x 200 points. Adding a 30-point label might look good on the screen, but then grabbing that as a UIImage (either for zooming or for saving to disk) ends up as a 600 x 400 pixel image which, of course, will not look good at a larger size.
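The arithmetic here can be put in a few lines; this is just a sketch of the capture math, and the helper name is illustrative, not part of UIKit:

```swift
import Foundation

// Pixel size of a view "capture" = point size x rendering scale.
// `capturedPixelSize` is an illustrative helper, not a UIKit API.
func capturedPixelSize(pointSize: CGSize, scale: CGFloat) -> CGSize {
    CGSize(width: pointSize.width * scale, height: pointSize.height * scale)
}

let viewSize = CGSize(width: 300, height: 200)
let capture = capturedPixelSize(pointSize: viewSize, scale: 2.0) // @2x device
// 600 x 400 pixels -- a tenth of the 3000 x 2000 original in each
// dimension, which is why the capture looks soft at larger sizes.
```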
Frequently, the approach to resolve this is along these lines:
Allow the user to edit at screen dimensions, e.g.
show a 3000 x 2000 pixel image scaled down in a 300 x 200 view
add a Bezier Path, oval-in-rect (20, 20, 200, 200)
add a 30-point label at origin (32, 32)
Then, when "capturing" that for output / zooming
take the original 3000 x 2000 pixel image
add a Bezier Path, oval-in-rect (20 * 10, 20 * 10, 200 * 10, 200 * 10)
add a (30 * 10)-point label at origin (32 * 10, 32 * 10)
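Assuming the 10x numbers above, the output-space coordinates are plain multiplication; a Foundation-only sketch (the variable names are mine):

```swift
import Foundation

// Map editing coordinates (300 x 200 points on screen) into the
// full-resolution image space (3000 x 2000 pixels).
let editorWidth: CGFloat = 300
let imageWidth: CGFloat = 3000
let factor = imageWidth / editorWidth                      // 10.0

let screenOvalRect = CGRect(x: 20, y: 20, width: 200, height: 200)
let outputOvalRect = screenOvalRect.applying(CGAffineTransform(scaleX: factor, y: factor))
// oval drawn at (200, 200, 2000, 2000) in the output image

let screenFontSize: CGFloat = 30
let outputFontSize = screenFontSize * factor               // 300-point label
```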
Another option is to do the on-screen editing scaled-down.
So, you might use a 300 x 200 image view with your 3000 x 2000 pixel image (scaled to fit). When the user says "I want to add an oval Bezier path in rect (20, 20, 200, 200)," your code would draw that oval at rect (20 * 10, 20 * 10, 200 * 10, 200 * 10) on the image itself and then refresh the .image property of the image view.
Here's a little more detailed example to help make things clear:
class ViewController: UIViewController, UIScrollViewDelegate {
let topView: UIView = UIView()
let topLabel: UILabel = UILabel()
let botView: UIView = UIView()
let botLabel: UILabel = UILabel()
let topScrollView: UIScrollView = UIScrollView()
let botScrollView: UIScrollView = UIScrollView()
let topStatLabel: UILabel = UILabel()
let botStatLabel: UILabel = UILabel()
override func viewDidLoad() {
super.viewDidLoad()
[topView, topLabel, botView, botLabel, topScrollView, botScrollView, topStatLabel, botStatLabel].forEach {
$0.translatesAutoresizingMaskIntoConstraints = false
}
topView.addSubview(topLabel)
botView.addSubview(botLabel)
topScrollView.addSubview(topView)
botScrollView.addSubview(botView)
view.addSubview(topStatLabel)
view.addSubview(botStatLabel)
view.addSubview(topScrollView)
view.addSubview(botScrollView)
let g = view.safeAreaLayoutGuide
NSLayoutConstraint.activate([
topStatLabel.topAnchor.constraint(equalTo: g.topAnchor, constant: 20.0),
topStatLabel.leadingAnchor.constraint(equalTo: topScrollView.leadingAnchor),
topScrollView.topAnchor.constraint(equalTo: topStatLabel.bottomAnchor, constant: 4.0),
topScrollView.centerXAnchor.constraint(equalTo: g.centerXAnchor),
topScrollView.widthAnchor.constraint(equalToConstant: 300.0),
topScrollView.heightAnchor.constraint(equalToConstant: 200.0),
botScrollView.topAnchor.constraint(equalTo: topScrollView.bottomAnchor, constant: 12.0),
botScrollView.centerXAnchor.constraint(equalTo: g.centerXAnchor),
botScrollView.widthAnchor.constraint(equalToConstant: 300.0),
botScrollView.heightAnchor.constraint(equalToConstant: 200.0),
botStatLabel.topAnchor.constraint(equalTo: botScrollView.bottomAnchor, constant: 4.0),
botStatLabel.leadingAnchor.constraint(equalTo: botScrollView.leadingAnchor),
topView.widthAnchor.constraint(equalToConstant: 300.0),
topView.heightAnchor.constraint(equalToConstant: 200.0),
botView.widthAnchor.constraint(equalToConstant: 300.0 * 10.0),
botView.heightAnchor.constraint(equalToConstant: 200.0 * 10.0),
topLabel.topAnchor.constraint(equalTo: topView.topAnchor, constant: 8.0),
topLabel.leadingAnchor.constraint(equalTo: topView.leadingAnchor, constant: 8.0),
botLabel.topAnchor.constraint(equalTo: botView.topAnchor, constant: 8.0 * 10.0),
botLabel.leadingAnchor.constraint(equalTo: botView.leadingAnchor, constant: 8.0 * 10.0),
topView.topAnchor.constraint(equalTo: topScrollView.contentLayoutGuide.topAnchor),
topView.leadingAnchor.constraint(equalTo: topScrollView.contentLayoutGuide.leadingAnchor),
topView.trailingAnchor.constraint(equalTo: topScrollView.contentLayoutGuide.trailingAnchor),
topView.bottomAnchor.constraint(equalTo: topScrollView.contentLayoutGuide.bottomAnchor),
botView.topAnchor.constraint(equalTo: botScrollView.contentLayoutGuide.topAnchor),
botView.leadingAnchor.constraint(equalTo: botScrollView.contentLayoutGuide.leadingAnchor),
botView.trailingAnchor.constraint(equalTo: botScrollView.contentLayoutGuide.trailingAnchor),
botView.bottomAnchor.constraint(equalTo: botScrollView.contentLayoutGuide.bottomAnchor),
])
topLabel.textColor = .red
topLabel.backgroundColor = .yellow
topLabel.font = UIFont.systemFont(ofSize: 30.0, weight: .regular)
topLabel.text = "Sample Text"
botLabel.textColor = .red
botLabel.backgroundColor = .yellow
botLabel.font = UIFont.systemFont(ofSize: 30.0 * 10.0, weight: .regular)
botLabel.text = "Sample Text"
topScrollView.delegate = self
topScrollView.minimumZoomScale = 1
topScrollView.maximumZoomScale = 10
botScrollView.delegate = self
botScrollView.minimumZoomScale = 0.1
botScrollView.maximumZoomScale = 1
topScrollView.zoomScale = topScrollView.minimumZoomScale
botScrollView.zoomScale = botScrollView.minimumZoomScale
view.backgroundColor = UIColor(white: 0.9, alpha: 1.0)
topScrollView.backgroundColor = .white
botScrollView.backgroundColor = .white
topStatLabel.font = UIFont.systemFont(ofSize: 14, weight: .light)
topStatLabel.numberOfLines = 0
botStatLabel.font = UIFont.systemFont(ofSize: 14, weight: .light)
botStatLabel.numberOfLines = 0
let t = UITapGestureRecognizer(target: self, action: #selector(self.tapped(_:)))
view.addGestureRecognizer(t)
}
override func viewDidAppear(_ animated: Bool) {
super.viewDidAppear(animated)
updateStatLabels()
}
func updateStatLabels() -> Void {
var sTop = ""
sTop += "Label Point Size: \(topLabel.font.pointSize)"
sTop += "\n"
sTop += "Label Frame: \(topLabel.frame)"
sTop += "\n"
sTop += "View Size: \(topView.bounds.size)"
sTop += "\n"
sTop += "Zoom Scale: \(String(format: "%0.1f", topScrollView.zoomScale))"
var sBot = ""
sBot += "Zoom Scale: \(String(format: "%0.1f", botScrollView.zoomScale))"
sBot += "\n"
sBot += "View Size: \(botView.bounds.size)"
sBot += "\n"
sBot += "Label Frame: \(botLabel.frame)"
sBot += "\n"
sBot += "Label Point Size: \(botLabel.font.pointSize)"
topStatLabel.text = sTop
botStatLabel.text = sBot
}
func viewForZooming(in scrollView: UIScrollView) -> UIView? {
if scrollView == topScrollView {
return topView
}
return botView
}
@objc func tapped(_ g: UITapGestureRecognizer) -> Void {
if Int(topScrollView.zoomScale) == Int(topScrollView.maximumZoomScale) {
topScrollView.zoomScale = topScrollView.minimumZoomScale
} else {
topScrollView.zoomScale += 1
}
topScrollView.contentOffset = .zero
// comparing floating point directly will fail, so round the values
if round(botScrollView.zoomScale * 10) == round(botScrollView.maximumZoomScale * 10) {
botScrollView.zoomScale = botScrollView.minimumZoomScale
} else {
botScrollView.zoomScale += 0.1
}
botScrollView.contentOffset = .zero
updateStatLabels()
}
}
The top scroll view has a 300 x 200 view with a 30-point label, allowing zoomScale from 1 to 10.
The bottom scroll view has a 3000 x 2000 view with a 300-point label, allowing zoomScale from 0.1 to 1.0.
Each time you tap the screen, the scrollViews increase zoomScale by 1 and 0.1 respectively.
And it looks like this at min-scale:
at 5 and 0.5 scale:
and at 10 and 1.0 scale:
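A quick way to see in numbers why both scroll views cover the same on-screen area at max zoom, yet only the bottom one stays sharp:

```swift
import Foundation

// Effective on-screen text size = label point size x zoom scale.
let topOnScreen = 30.0 * 10.0     // 30-pt text zoomed 10x -> 300
let botOnScreen = 300.0 * 1.0     // 300-pt text at 1x     -> 300

// Same on-screen size -- but the top label was rasterized from only
// 30 points of glyph detail and then magnified, while the bottom
// label was rasterized with the full 300 points of detail.
```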
I am using this code in one of my apps and it seems to work fine. I don't know whether its quality is good enough for you.
import UIKit

extension UIApplication {
    var screenShot: UIImage? {
        if let layer = keyWindow?.layer {
            let scale = UIScreen.main.scale
            UIGraphicsBeginImageContextWithOptions(layer.frame.size, false, scale)
            if let context = UIGraphicsGetCurrentContext() {
                layer.render(in: context)
                let screenshot = UIGraphicsGetImageFromCurrentImageContext()
                UIGraphicsEndImageContext()
                return screenshot
            }
        }
        return nil
    }
}
I use this extension to create an image from a view. Note that UIGraphicsGetCurrentContext() returns a reference to the current graphics context; it does not create one. This is important to remember: seen this way, it's clear why the function takes no size parameter, because the current context already has the size it was given when the graphics context was created.
extension UIView {
    func toImage() -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(bounds.size, false, UIScreen.main.scale)
        drawHierarchy(in: self.bounds, afterScreenUpdates: true)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}
I'm trying to recreate Apple's festival lights image in SwiftUI (screenshot from Apple India's website). Expected result:
Here's what I've managed to achieve so far:
MY UNDERSTANDING SO FAR: Images are not Shapes, so we can't stroke their borders, but I also found that shadow() modifier places shadows on image borders just fine. So, I need a way to customize the shadow somehow and understand how it works.
WHAT I'VE TRIED SO FAR: Besides the code above, I tried, unsuccessfully, to convert a given SF Symbol to a Shape using the Vision framework's contour detection, based on my understanding of this article: https://www.iosdevie.com/p/new-in-ios-14-vision-contour-detection
Can someone please guide me on how I would go about doing this, preferably using SF symbols only.
Looks like the Vision contour detection isn't a bad approach after all. I was just missing a few things, as helpfully pointed out by @DonMag. Here's my final answer using SwiftUI, in case someone's interested.
First, we create an InsettableShape:
import SwiftUI
import Vision

struct MKSymbolShape: InsettableShape {
    var insetAmount = 0.0
    let systemName: String

    var trimmedImage: UIImage {
        let cfg = UIImage.SymbolConfiguration(pointSize: 256.0)
        // get the symbol
        guard let imgA = UIImage(systemName: systemName, withConfiguration: cfg)?.withTintColor(.black, renderingMode: .alwaysOriginal) else {
            fatalError("Could not load SF Symbol: \(systemName)!")
        }
        // we want to "strip" the bounding box empty space
        // get a cgRef from imgA
        guard let cgRef = imgA.cgImage else {
            fatalError("Could not get cgImage!")
        }
        // create imgB from the cgRef
        let imgB = UIImage(cgImage: cgRef, scale: imgA.scale, orientation: imgA.imageOrientation)
            .withTintColor(.black, renderingMode: .alwaysOriginal)
        // now render it on a white background
        let resultImage = UIGraphicsImageRenderer(size: imgB.size).image { ctx in
            UIColor.white.setFill()
            ctx.fill(CGRect(origin: .zero, size: imgB.size))
            imgB.draw(at: .zero)
        }
        return resultImage
    }

    func path(in rect: CGRect) -> Path {
        // cgPath returned from Vision will be in rect 0,0 1.0,1.0 coordinates
        // so we want to scale the path to our view bounds
        let inputImage = self.trimmedImage
        guard let cgPath = detectVisionContours(from: inputImage) else { return Path() }
        let scW: CGFloat = (rect.width - CGFloat(insetAmount)) / cgPath.boundingBox.width
        let scH: CGFloat = (rect.height - CGFloat(insetAmount)) / cgPath.boundingBox.height
        // we need to invert the Y-coordinate space
        var transform = CGAffineTransform.identity
            .scaledBy(x: scW, y: -scH)
            .translatedBy(x: 0.0, y: -cgPath.boundingBox.height)
        if let imagePath = cgPath.copy(using: &transform) {
            return Path(imagePath)
        } else {
            return Path()
        }
    }

    func inset(by amount: CGFloat) -> some InsettableShape {
        var shape = self
        shape.insetAmount += amount
        return shape
    }

    func detectVisionContours(from sourceImage: UIImage) -> CGPath? {
        let inputImage = CIImage(cgImage: sourceImage.cgImage!)
        let contourRequest = VNDetectContoursRequest()
        contourRequest.revision = VNDetectContourRequestRevision1
        contourRequest.contrastAdjustment = 1.0
        contourRequest.maximumImageDimension = 512
        let requestHandler = VNImageRequestHandler(ciImage: inputImage, options: [:])
        try! requestHandler.perform([contourRequest])
        if let contoursObservation = contourRequest.results?.first {
            return contoursObservation.normalizedPath
        }
        return nil
    }
}
Then we create our main view:
struct PreviewView: View {
var body: some View {
ZStack {
LinearGradient(colors: [.black, .purple], startPoint: .top, endPoint: .bottom)
.edgesIgnoringSafeArea(.all)
MKSymbolShape(systemName: "applelogo")
.stroke(LinearGradient(colors: [.yellow, .orange, .pink, .red], startPoint: .top, endPoint: .bottom), style: StrokeStyle(lineWidth: 8, lineCap: .round, dash: [2.0, 21.0]))
.aspectRatio(CGSize(width: 30, height: 36), contentMode: .fit)
.padding()
}
}
}
Final look:
We can use the Vision framework with VNDetectContourRequestRevision1 to get a cgPath:
func detectVisionContours(from sourceImage: UIImage) -> CGPath? {
    let inputImage = CIImage(cgImage: sourceImage.cgImage!)
    let contourRequest = VNDetectContoursRequest()
    contourRequest.revision = VNDetectContourRequestRevision1
    contourRequest.contrastAdjustment = 1.0
    contourRequest.maximumImageDimension = 512
    let requestHandler = VNImageRequestHandler(ciImage: inputImage, options: [:])
    try! requestHandler.perform([contourRequest])
    if let contoursObservation = contourRequest.results?.first {
        return contoursObservation.normalizedPath
    }
    return nil
}
The path will be based on a 0,0 to 1.0,1.0 coordinate space, so to use it we need to scale the path to our desired size. It also uses inverted Y-coordinates, so we'll need to flip it as well:
// cgPath returned from Vision will be in rect 0,0 1.0,1.0 coordinates
// so we want to scale the path to our view bounds
let scW: CGFloat = targetRect.bounds.width / cgPth.boundingBox.width
let scH: CGFloat = targetRect.bounds.height / cgPth.boundingBox.height
// we need to invert the Y-coordinate space
var transform = CGAffineTransform.identity
    .scaledBy(x: scW, y: -scH)
    .translatedBy(x: 0.0, y: -cgPth.boundingBox.height)
return cgPth.copy(using: &transform)
A couple of notes...
When using UIImage(systemName: "applelogo"), we get an image with "font" characteristics - namely, empty space. See this https://stackoverflow.com/a/71743787/6257435 and this https://stackoverflow.com/a/66293917/6257435 for some discussion.
So, we could use it directly, but it makes the path scaling and translation a bit complex.
So, instead of this "default":
we can use a little code to "trim" the space for a more usable image:
Then we can use the path from Vision as the path of a CAShapeLayer, along with these layer properties: .lineCap = .round / .lineWidth = 8 / .lineDashPattern = [2.0, 20.0] (for example) to get a "dotted line" stroke:
Then we can use that same path on a shape layer as a mask on a gradient layer:
and finally remove the image view so we see only the view with the masked gradient layer:
Here's example code to produce that:
import UIKit
import Vision
class ViewController: UIViewController {
let myOutlineView = UIView()
let myGradientView = UIView()
let shapeLayer = CAShapeLayer()
let gradientLayer = CAGradientLayer()
let defaultImageView = UIImageView()
let trimmedImageView = UIImageView()
var defaultImage: UIImage!
var trimmedImage: UIImage!
var visionPath: CGPath!
// an information label
let infoLabel: UILabel = {
let v = UILabel()
v.backgroundColor = UIColor(white: 0.95, alpha: 1.0)
v.textAlignment = .center
v.numberOfLines = 0
return v
}()
override func viewDidLoad() {
super.viewDidLoad()
view.backgroundColor = .systemBlue
// get the system image at 240-points (so we can get a good path from Vision)
// experiment with different sizes if the path doesn't appear smooth
let cfg = UIImage.SymbolConfiguration(pointSize: 240.0)
// get "applelogo" symbol
guard let imgA = UIImage(systemName: "applelogo", withConfiguration: cfg)?.withTintColor(.black, renderingMode: .alwaysOriginal) else {
fatalError("Could not load SF Symbol: applelogo!")
}
// now render it on a white background
self.defaultImage = UIGraphicsImageRenderer(size: imgA.size).image { ctx in
UIColor.white.setFill()
ctx.fill(CGRect(origin: .zero, size: imgA.size))
imgA.draw(at: .zero)
}
// we want to "strip" the bounding box empty space
// get a cgRef from imgA
guard let cgRef = imgA.cgImage else {
fatalError("Could not get cgImage!")
}
// create imgB from the cgRef
let imgB = UIImage(cgImage: cgRef, scale: imgA.scale, orientation: imgA.imageOrientation)
.withTintColor(.black, renderingMode: .alwaysOriginal)
// now render it on a white background
self.trimmedImage = UIGraphicsImageRenderer(size: imgB.size).image { ctx in
UIColor.white.setFill()
ctx.fill(CGRect(origin: .zero, size: imgB.size))
imgB.draw(at: .zero)
}
defaultImageView.image = defaultImage
defaultImageView.translatesAutoresizingMaskIntoConstraints = false
view.addSubview(defaultImageView)
trimmedImageView.image = trimmedImage
trimmedImageView.translatesAutoresizingMaskIntoConstraints = false
view.addSubview(trimmedImageView)
myOutlineView.translatesAutoresizingMaskIntoConstraints = false
view.addSubview(myOutlineView)
myGradientView.translatesAutoresizingMaskIntoConstraints = false
view.addSubview(myGradientView)
// next step button
let btn = UIButton()
btn.setTitle("Next Step", for: [])
btn.setTitleColor(.white, for: .normal)
btn.setTitleColor(.lightGray, for: .highlighted)
btn.backgroundColor = .systemRed
btn.layer.cornerRadius = 8
btn.translatesAutoresizingMaskIntoConstraints = false
view.addSubview(btn)
infoLabel.translatesAutoresizingMaskIntoConstraints = false
view.addSubview(infoLabel)
let g = view.safeAreaLayoutGuide
NSLayoutConstraint.activate([
// inset default image view 20-points on each side
// height proportional to the image
// near the top
defaultImageView.topAnchor.constraint(equalTo: g.topAnchor, constant: 20.0),
defaultImageView.leadingAnchor.constraint(equalTo: g.leadingAnchor, constant: 20.0),
defaultImageView.trailingAnchor.constraint(equalTo: g.trailingAnchor, constant: -20.0),
defaultImageView.heightAnchor.constraint(equalTo: defaultImageView.widthAnchor, multiplier: defaultImage.size.height / defaultImage.size.width),
// inset trimmed image view 40-points on each side
// height proportional to the image
// centered vertically
trimmedImageView.topAnchor.constraint(equalTo: g.topAnchor, constant: 40.0),
trimmedImageView.leadingAnchor.constraint(equalTo: g.leadingAnchor, constant: 40.0),
trimmedImageView.trailingAnchor.constraint(equalTo: g.trailingAnchor, constant: -40.0),
trimmedImageView.heightAnchor.constraint(equalTo: trimmedImageView.widthAnchor, multiplier: self.trimmedImage.size.height / self.trimmedImage.size.width),
// add outline view on top of trimmed image view
myOutlineView.topAnchor.constraint(equalTo: trimmedImageView.topAnchor, constant: 0.0),
myOutlineView.leadingAnchor.constraint(equalTo: trimmedImageView.leadingAnchor, constant: 0.0),
myOutlineView.trailingAnchor.constraint(equalTo: trimmedImageView.trailingAnchor, constant: 0.0),
myOutlineView.bottomAnchor.constraint(equalTo: trimmedImageView.bottomAnchor, constant: 0.0),
// add gradient view on top of trimmed image view
myGradientView.topAnchor.constraint(equalTo: trimmedImageView.topAnchor, constant: 0.0),
myGradientView.leadingAnchor.constraint(equalTo: trimmedImageView.leadingAnchor, constant: 0.0),
myGradientView.trailingAnchor.constraint(equalTo: trimmedImageView.trailingAnchor, constant: 0.0),
myGradientView.bottomAnchor.constraint(equalTo: trimmedImageView.bottomAnchor, constant: 0.0),
// button and info label below
btn.topAnchor.constraint(equalTo: defaultImageView.bottomAnchor, constant: 20.0),
btn.leadingAnchor.constraint(equalTo: trimmedImageView.leadingAnchor, constant: 0.0),
btn.trailingAnchor.constraint(equalTo: trimmedImageView.trailingAnchor, constant: 0.0),
infoLabel.topAnchor.constraint(equalTo: btn.bottomAnchor, constant: 20.0),
infoLabel.leadingAnchor.constraint(equalTo: trimmedImageView.leadingAnchor, constant: 0.0),
infoLabel.trailingAnchor.constraint(equalTo: trimmedImageView.trailingAnchor, constant: 0.0),
infoLabel.heightAnchor.constraint(greaterThanOrEqualToConstant: 60.0),
])
// setup the shape layer
shapeLayer.strokeColor = UIColor.red.cgColor
shapeLayer.fillColor = UIColor.clear.cgColor
// this will give use round dots for the shape layer's stroke
shapeLayer.lineCap = .round
shapeLayer.lineWidth = 8
shapeLayer.lineDashPattern = [2.0, 20.0]
// setup the gradient layer
let c1: UIColor = .init(red: 0.95, green: 0.73, blue: 0.32, alpha: 1.0)
let c2: UIColor = .init(red: 0.95, green: 0.25, blue: 0.45, alpha: 1.0)
gradientLayer.colors = [c1.cgColor, c2.cgColor]
myOutlineView.layer.addSublayer(shapeLayer)
myGradientView.layer.addSublayer(gradientLayer)
btn.addTarget(self, action: #selector(nextStep), for: .touchUpInside)
}
override func viewDidAppear(_ animated: Bool) {
super.viewDidAppear(animated)
guard let pth = pathSetup()
else {
fatalError("Vision could not create path")
}
self.visionPath = pth
shapeLayer.path = pth
gradientLayer.frame = myGradientView.bounds.insetBy(dx: -8.0, dy: -8.0)
let gradMask = CAShapeLayer()
gradMask.strokeColor = UIColor.red.cgColor
gradMask.fillColor = UIColor.clear.cgColor
gradMask.lineCap = .round
gradMask.lineWidth = 8
gradMask.lineDashPattern = [2.0, 20.0]
gradMask.path = pth
gradMask.position.x += 8.0
gradMask.position.y += 8.0
gradientLayer.mask = gradMask
nextStep()
}
var idx: Int = -1
@objc func nextStep() {
idx += 1
switch idx % 5 {
case 1:
defaultImageView.isHidden = true
trimmedImageView.isHidden = false
infoLabel.text = "\"applelogo\" system image - with trimmed empty-space bounding-box."
case 2:
myOutlineView.isHidden = false
shapeLayer.opacity = 1.0
infoLabel.text = "Dotted outline shape using Vision detected path."
case 3:
myOutlineView.isHidden = true
myGradientView.isHidden = false
infoLabel.text = "Use Dotted outline shape as a gradient layer mask."
case 4:
trimmedImageView.isHidden = true
view.backgroundColor = .black
infoLabel.text = "View by itself with Dotted outline shape as a gradient layer mask."
default:
view.backgroundColor = .systemBlue
defaultImageView.isHidden = false
trimmedImageView.isHidden = true
myOutlineView.isHidden = true
myGradientView.isHidden = true
shapeLayer.opacity = 0.0
infoLabel.text = "Default \"applelogo\" system image - note empty-space bounding-box."
}
}
func pathSetup() -> CGPath? {
// get the cgPath from the image
guard let cgPth = detectVisionContours(from: self.trimmedImage)
else {
print("Failed to get path!")
return nil
}
// cgPath returned from Vision will be in rect 0,0 1.0,1.0 coordinates
// so we want to scale the path to our view bounds
let scW: CGFloat = myOutlineView.bounds.width / cgPth.boundingBox.width
let scH: CGFloat = myOutlineView.bounds.height / cgPth.boundingBox.height
// we need to invert the Y-coordinate space
var transform = CGAffineTransform.identity
.scaledBy(x: scW, y: -scH)
.translatedBy(x: 0.0, y: -cgPth.boundingBox.height)
return cgPth.copy(using: &transform)
}
func detectVisionContours(from sourceImage: UIImage) -> CGPath? {
let inputImage = CIImage.init(cgImage: sourceImage.cgImage!)
let contourRequest = VNDetectContoursRequest.init()
contourRequest.revision = VNDetectContourRequestRevision1
contourRequest.contrastAdjustment = 1.0
contourRequest.maximumImageDimension = 512
let requestHandler = VNImageRequestHandler.init(ciImage: inputImage, options: [:])
try! requestHandler.perform([contourRequest])
if let contoursObservation = contourRequest.results?.first {
return contoursObservation.normalizedPath
}
return nil
}
}
I have a UIView with a UIImageView added as a subview; the UIImageView is a texture that repeats. The UIView's width and height are correct, but the image extends outside that size. I added clipsToBounds, but it's not clipping the image at all. Is there a specific order, or what am I doing wrong that the image is not clipped inside its parent view?
let rectangleView = UIView(frame: CGRect(x: x, y: y, width: width, height: height))
rectangleView.isUserInteractionEnabled = false
if let texturesUrl = layout.Url, let url = texturesUrl.isValidURL() ? URL(string: texturesUrl) : URL(string: String(format: AppManager.shared.baseTexturesUrl, texturesUrl)) {
    let widthLimit = scale * CGFloat(layout.Width ?? 0)
    let heightLimit = scale * CGFloat(layout.Height ?? 0)
    let widthStep = scale * CGFloat(layout.TileWidth ?? layout.Width ?? 0)
    let heightStep = scale * CGFloat(layout.TileHeight ?? layout.Height ?? 0)
    var locY = CGFloat(0)
    let size = CGSize(width: widthStep, height: heightStep)
    if widthLimit > 0, heightLimit > 0 {
        while locY < heightLimit {
            var locX = CGFloat(0)
            while locX < widthLimit {
                let imageView = UIImageView()
                rectangleView.addSubview(imageView)
                imageView.contentMode = .scaleAspectFill
                imageView.translatesAutoresizingMaskIntoConstraints = false
                imageView.clipsToBounds = true
                imageView.isUserInteractionEnabled = false
                imageView.anchor(top: rectangleView.topAnchor, leading: rectangleView.leadingAnchor, bottom: nil, trailing: nil, padding: UIEdgeInsets(top: locY, left: locX, bottom: 0, right: 0), size: size)
                imageView.setImage(with: url, size: size)
                locX += widthStep
            }
            locY += heightStep
        }
    }
}
You don't need to add so many image views; just use the image as a repeating background:
rectangleView.backgroundColor = UIColor(patternImage: myImage)
See documentation for UIColor(patternImage:).
You can do this much more efficiently with CAReplicatorLayer.
Here's a quick example:
class TileExampleViewController: UIViewController {
    let tiledView = UIView()

    override func viewDidLoad() {
        super.viewDidLoad()
        tiledView.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(tiledView)
        let g = view.safeAreaLayoutGuide
        NSLayoutConstraint.activate([
            tiledView.topAnchor.constraint(equalTo: g.topAnchor, constant: 20.0),
            tiledView.leadingAnchor.constraint(equalTo: g.leadingAnchor, constant: 20.0),
            tiledView.trailingAnchor.constraint(equalTo: g.trailingAnchor, constant: -20.0),
            tiledView.bottomAnchor.constraint(equalTo: g.bottomAnchor, constant: -20.0),
        ])
    }

    override func viewDidLayoutSubviews() {
        // we want to do this here, when we know the
        // size / frame of the tiledView

        // make sure we can load the image
        guard let tileImage = UIImage(named: "tileSquare") else { return }

        // let's just pick 80 x 80 for the tile size
        let tileSize: CGSize = CGSize(width: 80.0, height: 80.0)

        // create a "horizontal" replicator layer
        let hReplicatorLayer = CAReplicatorLayer()
        hReplicatorLayer.frame.size = tiledView.frame.size
        hReplicatorLayer.masksToBounds = true

        // create a "vertical" replicator layer
        let vReplicatorLayer = CAReplicatorLayer()
        vReplicatorLayer.frame.size = tiledView.frame.size
        vReplicatorLayer.masksToBounds = true

        // create a layer to hold the image
        let imageLayer = CALayer()
        imageLayer.contents = tileImage.cgImage
        imageLayer.frame.size = tileSize

        // add the imageLayer to the horizontal replicator layer
        hReplicatorLayer.addSublayer(imageLayer)

        // add the horizontal replicator layer to the vertical replicator layer
        vReplicatorLayer.addSublayer(hReplicatorLayer)

        // how many "tiles" do we need to fill the width
        let hCount = tiledView.frame.width / tileSize.width
        hReplicatorLayer.instanceCount = Int(ceil(hCount))

        // shift each image instance right by tileSize width
        hReplicatorLayer.instanceTransform = CATransform3DMakeTranslation(tileSize.width, 0, 0)

        // how many "rows" do we need to fill the height
        let vCount = tiledView.frame.height / tileSize.height
        vReplicatorLayer.instanceCount = Int(ceil(vCount))

        // shift each "row" down by tileSize height
        vReplicatorLayer.instanceTransform = CATransform3DMakeTranslation(0, tileSize.height, 0)

        // add the vertical replicator layer as a sublayer
        tiledView.layer.addSublayer(vReplicatorLayer)
    }
}
I used this tile image:
and we get this result with let tileSize: CGSize = CGSize(width: 80.0, height: 80.0):
with let tileSize: CGSize = CGSize(width: 120.0, height: 160.0):
with let tileSize: CGSize = CGSize(width: 40.0, height: 40.0):
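For reference, the instanceCount values in viewDidLayoutSubviews are just ceiling division of the tiled view's size by the tile size; the 350 x 610 view size below is an assumed example, not from the post:

```swift
import Foundation

let tiledSize = CGSize(width: 350, height: 610)   // assumed example size
let tileSize = CGSize(width: 80, height: 80)

// one extra partial tile is needed whenever the division isn't exact
let hCount = Int((tiledSize.width / tileSize.width).rounded(.up))   // 350/80 = 4.375 -> 5
let vCount = Int((tiledSize.height / tileSize.height).rounded(.up)) // 610/80 = 7.625 -> 8
```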
I have two UILabels: a bigger one on the left side of the screen and a smaller one on the right side.
I'm trying to use CGAffineTransform to animate moving the smaller label into the place of the bigger one, scaling it to the same size, and moving the bigger one off screen.
I'm not actually moving the labels; after the animation completes, I change the text property on the labels and set their transforms back to identity.
My problem is that I don't know how to calculate the exact x and y values by which to translate my smaller label. I think my values are inaccurate because I scale the label at the same time as I translate it, while the tx and ty values are calculated from the non-scaled size of the smaller label.
What I do currently:
tx: the width of the bigger label + the distance between it and the smaller label, ty: the distance between the centres of the two labels on the y axis
There are various ways to do this - here's one...
Start by calculating the translation values and the scale values, then concatenate them:
let translation = CGAffineTransform(translationX: xMove, y: yMove)
let scaling = CGAffineTransform(scaleX: xScale, y: yScale)
let fullTransform = scaling.concatenating(translation)
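If you want to convince yourself of the order of operations, here is a pure-Swift sketch (no UIKit) that multiplies 2x3 affine matrices the way concatenating does: the receiver's transform is applied first, so scaling.concatenating(translation) means "scale, then translate":

```swift
// Minimal affine-transform math mirroring CGAffineTransform's
// layout (a, b, c, d, tx, ty). Hypothetical helper for illustration.
struct Affine {
    var a = 1.0, b = 0.0, c = 0.0, d = 1.0, tx = 0.0, ty = 0.0

    static func translation(x: Double, y: Double) -> Affine {
        Affine(tx: x, ty: y)
    }
    static func scale(x: Double, y: Double) -> Affine {
        Affine(a: x, d: y)
    }
    // self is applied first, then t - same semantics as
    // CGAffineTransform.concatenating(_:)
    func concatenating(_ t: Affine) -> Affine {
        Affine(a: a * t.a + b * t.c,
               b: a * t.b + b * t.d,
               c: c * t.a + d * t.c,
               d: c * t.b + d * t.d,
               tx: tx * t.a + ty * t.c + t.tx,
               ty: tx * t.b + ty * t.d + t.ty)
    }
    func apply(to p: (x: Double, y: Double)) -> (x: Double, y: Double) {
        (x: a * p.x + c * p.y + tx, y: b * p.x + d * p.y + ty)
    }
}

let full = Affine.scale(x: 2.0, y: 2.0)
    .concatenating(.translation(x: 100.0, y: 50.0))
let moved = full.apply(to: (x: 10.0, y: 10.0))
// moved is (x: 120.0, y: 70.0): the point is doubled first,
// then shifted - not the other way around
```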
Here's a complete example... we add two labels with different font sizes, locations and background colors (to make it easy to see). Tap anywhere to run the transform animation:
class ViewController: UIViewController {
let labelA = UILabel()
let labelB = UILabel()
var aTop: NSLayoutConstraint!
var aLeading: NSLayoutConstraint!
var bTop: NSLayoutConstraint!
var bLeading: NSLayoutConstraint!
override func viewDidLoad() {
super.viewDidLoad()
[labelA, labelB].forEach { v in
v.translatesAutoresizingMaskIntoConstraints = false
view.addSubview(v)
}
labelA.text = "Label A"
labelB.text = "Label B"
labelA.backgroundColor = .green
labelB.backgroundColor = .cyan
labelA.font = .systemFont(ofSize: 40.0)
labelB.font = .systemFont(ofSize: 20.0)
// respect safe area
let g = view.safeAreaLayoutGuide
aTop = labelA.topAnchor.constraint(equalTo: g.topAnchor, constant: 100.0)
aLeading = labelA.leadingAnchor.constraint(equalTo: g.leadingAnchor, constant: 40.0)
bTop = labelB.topAnchor.constraint(equalTo: g.topAnchor, constant: 300.0)
bLeading = labelB.leadingAnchor.constraint(equalTo: g.leadingAnchor, constant: 240.0)
NSLayoutConstraint.activate([
aTop, aLeading,
bTop, bLeading,
])
let t = UITapGestureRecognizer(target: self, action: #selector(self.doAnim(_:)))
view.addGestureRecognizer(t)
}
@objc func doAnim(_ g: UITapGestureRecognizer?) -> Void {
let targetPoint = labelA.center
let originPoint = labelB.center
let xMove = targetPoint.x - originPoint.x
let yMove = targetPoint.y - originPoint.y
let xScale = labelA.frame.width / labelB.frame.width
let yScale = labelA.frame.height / labelB.frame.height
let translation = CGAffineTransform(translationX: xMove, y: yMove)
let scaling = CGAffineTransform(scaleX: xScale, y: yScale)
let fullTransform = scaling.concatenating(translation)
UIView.animate(withDuration: 1.0, animations: {
self.labelB.transform = fullTransform
}) { [weak self] b in
guard let self = self else { return }
self.labelB.transform = .identity
self.labelB.font = self.labelA.font
self.bTop.constant = self.aTop.constant
self.bLeading.constant = self.aLeading.constant
}
}
}
I want to display a semi-transparent text field over a UIImageView. I can't choose a static color for the text field because some images are bright while others are dark. I want to automatically adapt the text field color based on the colors behind it. Is there an easy solution for that?
UPD:
The effect I want to achieve is:
If UIImage is dark where my textfield should be placed, set textfield background color to white with 0.5 opacity.
If UIImage is light where my textfield should be placed, set textfield background color to black with 0.5 opacity.
So I want to somehow calculate the average color of the UIImageView in the place where I want to put my text field and then check whether it is light or dark. I don't know how to get a snapshot of just that part of my UIImageView and compute its average color, and I want it to be optimized. I know how to do it with UIGraphicsImageRenderer, but I don't think my way is good enough, which is why I'm asking this question.
"Brightness" of an image is not a straightforward thing. You may or may not find this approach suitable.
If you search for determine brightness of an image you'll find plenty of documentation on it - likely way more than you want.
One common way to calculate the "brightness" of a pixel is to use the sum of:
red component * 0.299
green component * 0.587
blue component * 0.114
This is because we perceive the different colors at different "brightness" levels.
So, you'll want to loop through each pixel in the area of the image where you want to place your label (or textField), get the average brightness, and then decide what's "dark" and what's "light".
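As a quick sanity check of those weights (a minimal pure-Swift sketch): white comes out at the top of the range, and pure green reads far brighter than pure blue:

```swift
// Per-pixel luma using the common 0.299 / 0.587 / 0.114 weights,
// for 8-bit channels in the 0...255 range
func luma(r: Double, g: Double, b: Double) -> Double {
    0.299 * r + 0.587 * g + 0.114 * b
}

// luma(r: 255, g: 255, b: 255) ~= 255  (white: maximally bright)
// luma(r: 0,   g: 255, b: 0)   ~= 150  (pure green reads bright)
// luma(r: 0,   g: 0,   b: 255) ~= 29   (pure blue reads dark)
```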
As an example, using this background image:
I generated a 5 x 8 grid of labels, looped through getting the "brightness" of the image in the rect under each label's frame, and then set the background and text color based on the brightness calculation (values range from 0 to 255, so I used < 127 is dark, >= 127 is light):
This is the code I used:
extension CGImage {
// common formula for "average perceived brightness" (0 - 255)
// assumes 8 bits per component, with R, G, B as the first three components
var brightness: Double {
get {
guard let imageData = self.dataProvider?.data,
let ptr = CFDataGetBytePtr(imageData) else { return 0.0 }
let bytesPerPixel = self.bitsPerPixel / self.bitsPerComponent
var pixelCount = 0
var result: Double = 0
for row in 0..<self.height {
// step through rows with bytesPerRow, which can include
// padding (and usually does for a cropped CGImage)
var p = row * self.bytesPerRow
for _ in 0..<self.width {
let r = ptr[p+0]
let g = ptr[p+1]
let b = ptr[p+2]
result += (0.299 * Double(r) + 0.587 * Double(g) + 0.114 * Double(b))
p += bytesPerPixel
pixelCount += 1
}
}
return result / Double(pixelCount)
}
}
}
extension UIImage {
// get the "brightness" of self (entire image)
var brightness: Double {
get {
return self.cgImage?.brightness ?? 0.0
}
}
// get the "brightness" in a sub-rect of self
// note: cropping(to:) works in pixel coordinates, so for images with
// scale != 1 the rect must be converted from points to pixels first
func brightnessIn(_ rect: CGRect) -> Double {
guard let cgImage = self.cgImage else { return 0.0 }
guard let croppedCGImage = cgImage.cropping(to: rect) else { return 0.0 }
return croppedCGImage.brightness
}
}
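One caveat with brightnessIn(_:): CGImage.cropping(to:) takes pixel coordinates, while view frames are in points. On a @2x or @3x image you'd convert the rect first; the conversion itself is just a multiplication. A pure-Swift sketch, using a tuple instead of CGRect so it stands alone:

```swift
// Scale a rect given in points into pixel coordinates by multiplying
// every component by the image's scale factor (e.g. 2.0 for @2x).
// Hypothetical helper - in real code you'd build a CGRect instead.
func pixelRect(x: Double, y: Double, w: Double, h: Double,
               scale: Double) -> (x: Double, y: Double, w: Double, h: Double) {
    (x: x * scale, y: y * scale, w: w * scale, h: h * scale)
}

// a (10, 20, 40, 30) rect in points on a @2x image
// becomes (20, 40, 80, 60) in pixels
```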
class ImageBrightnessViewController: UIViewController {
let imgView: UIImageView = {
let v = UIImageView()
v.contentMode = .center
v.backgroundColor = .green
v.clipsToBounds = true
v.translatesAutoresizingMaskIntoConstraints = false
return v
}()
var labels: [UILabel] = []
override func viewDidLoad() {
super.viewDidLoad()
// load an image
guard let img = UIImage(named: "bkg640x360") else { return }
imgView.image = img
let w = img.size.width
let h = img.size.height
// set image view's width and height equal to img width and height
view.addSubview(imgView)
let g = view.safeAreaLayoutGuide
NSLayoutConstraint.activate([
imgView.widthAnchor.constraint(equalToConstant: w),
imgView.heightAnchor.constraint(equalToConstant: h),
imgView.centerXAnchor.constraint(equalTo: g.centerXAnchor),
imgView.centerYAnchor.constraint(equalTo: g.centerYAnchor),
])
// use stack views to create a 5 x 8 grid of labels
let outerStackView: UIStackView = {
let v = UIStackView()
v.translatesAutoresizingMaskIntoConstraints = false
v.axis = .horizontal
v.spacing = 32
v.distribution = .fillEqually
return v
}()
for _ in 1...5 {
let vStack = UIStackView()
vStack.axis = .vertical
vStack.spacing = 12
vStack.distribution = .fillEqually
for _ in 1...8 {
let label = UILabel()
label.textAlignment = .center
vStack.addArrangedSubview(label)
labels.append(label)
}
outerStackView.addArrangedSubview(vStack)
}
let padding: CGFloat = 12.0
imgView.addSubview(outerStackView)
NSLayoutConstraint.activate([
outerStackView.topAnchor.constraint(equalTo: imgView.topAnchor, constant: padding),
outerStackView.leadingAnchor.constraint(equalTo: imgView.leadingAnchor, constant: padding),
outerStackView.trailingAnchor.constraint(equalTo: imgView.trailingAnchor, constant: -padding),
outerStackView.bottomAnchor.constraint(equalTo: imgView.bottomAnchor, constant: -padding),
])
}
override func viewDidAppear(_ animated: Bool) {
super.viewDidAppear(animated)
guard let img = imgView.image else {
return
}
labels.forEach { v in
if let sv = v.superview {
// convert label frame to imgView coordinate space
let rect = sv.convert(v.frame, to: imgView)
// get the "brightness" of that rect from the image
// it will be in the range of 0 - 255
let d = img.brightnessIn(rect)
// set the text of the label to that value
v.text = String(format: "%.2f", d)
// just using 50% here... adjust as desired
if d > 127.0 {
// the image is bright here, so use a
// black translucent background with white text
v.backgroundColor = UIColor.black.withAlphaComponent(0.5)
v.textColor = .white
} else {
// the image is dark here, so use a
// white translucent background with black text
v.backgroundColor = UIColor.white.withAlphaComponent(0.5)
v.textColor = .black
}
}
}
}
}
As you can see, when taking the average of a region of a photo, you'll quite often not be completely happy with the result. That's why it's much more common to pick a single text style and add a contrasting border, shadow, or glow around the letters so they stay readable over any background.
I am following an image crop tutorial.
I want to make an app that crops an image and adds stickers.
The sticker subviews are subviews of the image view ( imageview.addSubview(subview) ).
Please check this image link.
Everything is fine, but I can't produce the final image.
I want to create the final image at the original resolution, but my function's result is far too low resolution.
I've lost 5 days on this.
example)
scrollview’s rect 0, 0, 200, 400
imageview size 7680 x 4320
I tried the code below, but the result quality is very low, because the resolution is 200x400 (the scrollview's rect).
func screenshot() -> UIImage {
guard let imageview = imageview else { return UIImage() }
UIGraphicsBeginImageContextWithOptions(self.scrollView.bounds.size, false, UIScreen.main.scale)
let offset = self.scrollView.contentOffset
guard let thisContext = UIGraphicsGetCurrentContext() else { return UIImage() }
thisContext.translateBy(x: -offset.x, y: -offset.y)
self.scrollView.layer.render(in: thisContext)
let visibleScrollViewImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return visibleScrollViewImage ?? UIImage()
}
Please let me know how to get one high-quality final image, at the original resolution, including the sticker subviews.
Using that method, you get an image of what is rendered on the screen.
What you want to do is calculate the "sticker" size and placement relative to the scaled image and then combine the images.
You can use this extension to overlay one image on another:
extension UIImage {
func overlayWith(image: UIImage, posX: CGFloat, posY: CGFloat) -> UIImage {
let rFormat = UIGraphicsImageRendererFormat()
let renderer = UIGraphicsImageRenderer(size: size, format: rFormat)
let newImage = renderer.image {
(context) in
self.draw(at: .zero)
image.draw(at: CGPoint(x: posX, y: posY))
}
return newImage
}
}
Use it by calling (for example):
let combinedImage = bkgImage.overlayWith(image: stickerImage, posX: 1000.0, posY: 1000.0)
That says "Create a new UIImage by overlaying the sticker image onto the background image at x: 1000, y:1000"
Remember that you'll need to translate your coordinates... So, if you are showing your 7680 x 4320 image in a 200 x 400 scrollview (not taking any zooming into account here), the on-screen image size will be 711 x 400. If the user places the sticker at 100, 50, the actual position on the original size image will be:
let scaleFactor = bkgImage.size.height / 400.0
let x = 100.0 * scaleFactor
let y = 50.0 * scaleFactor
// scaleFactor equals 10.8
// x equals 100 * 10.8 == 1080
// y equals 50 * 10.8 == 540
let combinedImage = bkgImage.overlayWith(image: stickerImage, posX: x, posY: y)
Here is a basic sample you can try out.
It starts with a background image of 5120 x 2880:
and a "sticker" image at 512 x 512:
And the result, with the sticker placed at x: 1000, y:1000. Top image is original background (aspectFit), middle image is the combined image (again, aspectFit), and the bottom image is the actual size combined image in a scroll view:
Use this source code to run that example (all code, no @IBOutlet):
import UIKit
extension UIImage {
func overlayWith(image: UIImage, posX: CGFloat, posY: CGFloat) -> UIImage {
let rFormat = UIGraphicsImageRendererFormat()
let renderer = UIGraphicsImageRenderer(size: size, format: rFormat)
let newImage = renderer.image {
(context) in
self.draw(at: .zero)
image.draw(at: CGPoint(x: posX, y: posY))
}
return newImage
}
}
class ViewController: UIViewController {
let origImageView: UIImageView = {
let v = UIImageView()
v.translatesAutoresizingMaskIntoConstraints = false
v.contentMode = .scaleAspectFit
v.backgroundColor = .yellow
return v
}()
let modImageView: UIImageView = {
let v = UIImageView()
v.translatesAutoresizingMaskIntoConstraints = false
v.contentMode = .scaleAspectFit
v.backgroundColor = .yellow
return v
}()
let actualSizeImageView: UIImageView = {
let v = UIImageView()
v.translatesAutoresizingMaskIntoConstraints = false
v.contentMode = .topLeft
v.backgroundColor = .yellow
return v
}()
let scrollView: UIScrollView = {
let v = UIScrollView()
v.translatesAutoresizingMaskIntoConstraints = false
return v
}()
override func viewDidLoad() {
super.viewDidLoad()
guard let bkgImage = UIImage(named: "background"),
let stickerImage = UIImage(named: "sticker") else {
fatalError("missing images")
}
view.backgroundColor = .systemGreen
view.addSubview(origImageView)
view.addSubview(modImageView)
view.addSubview(scrollView)
scrollView.addSubview(actualSizeImageView)
let g = view.safeAreaLayoutGuide
let sg = scrollView.contentLayoutGuide
NSLayoutConstraint.activate([
origImageView.topAnchor.constraint(equalTo: g.topAnchor, constant: 10.0),
origImageView.centerXAnchor.constraint(equalTo: g.centerXAnchor, constant: 0.0),
origImageView.widthAnchor.constraint(equalToConstant: 200.0),
origImageView.heightAnchor.constraint(equalToConstant: 120.0),
modImageView.topAnchor.constraint(equalTo: origImageView.bottomAnchor, constant: 10.0),
modImageView.centerXAnchor.constraint(equalTo: origImageView.centerXAnchor, constant: 0.0),
modImageView.widthAnchor.constraint(equalTo: origImageView.widthAnchor),
modImageView.heightAnchor.constraint(equalTo: origImageView.heightAnchor),
scrollView.topAnchor.constraint(equalTo: modImageView.bottomAnchor, constant: 10.0),
scrollView.centerXAnchor.constraint(equalTo: origImageView.centerXAnchor, constant: 0.0),
scrollView.widthAnchor.constraint(equalTo: g.widthAnchor, constant: -10.0),
scrollView.bottomAnchor.constraint(equalTo: g.bottomAnchor, constant: -10.0),
actualSizeImageView.topAnchor.constraint(equalTo: sg.topAnchor),
actualSizeImageView.bottomAnchor.constraint(equalTo: sg.bottomAnchor),
actualSizeImageView.leadingAnchor.constraint(equalTo: sg.leadingAnchor),
actualSizeImageView.trailingAnchor.constraint(equalTo: sg.trailingAnchor),
actualSizeImageView.widthAnchor.constraint(equalToConstant: bkgImage.size.width),
actualSizeImageView.heightAnchor.constraint(equalToConstant: bkgImage.size.height),
])
origImageView.image = bkgImage
let combinedImage = bkgImage.overlayWith(image: stickerImage, posX: 1000.0, posY: 1000.0)
modImageView.image = combinedImage
actualSizeImageView.image = combinedImage
}
}