Image blurred slightly when drawing in drawRect (Swift) - iOS

I am building a circle-crop function in Swift. I pretty much have everything working; however, when I save the cropped photo it is slightly blurry:
I'm not sure if it is visible here, but on my iPhone I can see a difference, a slight blurring. I'm not zooming in beyond the actual size of the image, and I am using a UIScrollView with the maximum zoom factor set to 1.0. The crop code is:
func didTapOk() {
    let scale = scrollView.zoomScale
    let newSize = CGSize(width: image!.size.width * scale, height: image!.size.height * scale)
    let offset = scrollView.contentOffset

    UIGraphicsBeginImageContext(CGSize(width: 240, height: 240))
    let circlePath = UIBezierPath(ovalInRect: CGRect(x: 0, y: 0, width: 240, height: 240))
    circlePath.addClip()

    var sharpRect = CGRect(x: -offset.x, y: -offset.y, width: newSize.width, height: newSize.height)
    sharpRect = CGRectIntegral(sharpRect)

    image?.drawInRect(sharpRect)

    let finalImage = UIGraphicsGetImageFromCurrentImageContext()
    UIImageWriteToSavedPhotosAlbum(finalImage, nil, nil, nil)
}
Here I am trying to use CGRectIntegral to improve the result, but it doesn't seem to make any difference. Any pointers on what I could do to improve this?

Your crop is coming out blurry because the image context doesn't account for the scale of your screen (i.e. whether you're on a Retina display).
Use UIGraphicsBeginImageContextWithOptions(CGSize(width: 240, height: 240), true, 0) to account for the Retina screen. See the documentation for UIGraphicsBeginImageContextWithOptions for more details on how to use this function.
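In the question's code, that means swapping the context call for the version that takes a scale parameter; a minimal before/after (the rest of didTapOk stays as it is):
// Before: defaults to a scale of 1.0, so the output is downsampled on Retina screens.
UIGraphicsBeginImageContext(CGSize(width: 240, height: 240))

// After: true = opaque, 0 = use the device's screen scale.
UIGraphicsBeginImageContextWithOptions(CGSize(width: 240, height: 240), true, 0)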

I suspect that, although the accepted answer above is working for you, it is only working due to the screen scale being the same as the image scale.
As you are not rendering your image from the screen itself (you're rendering from a UIImage to another UIImage), the screen scale should be irrelevant.
You should instead be passing in the scale of the image when you create your context, like so:
UIGraphicsBeginImageContextWithOptions(CGSize(width: 240, height: 240), false, image.scale)
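For reference, a rough sketch of the whole crop function with that change applied, updated to current Swift (scrollView, image, and the 240-point output size are taken from the question; this is an illustration, not the asker's exact code):
func didTapOk() {
    guard let image = image else { return }
    let scale = scrollView.zoomScale
    let newSize = CGSize(width: image.size.width * scale, height: image.size.height * scale)
    let offset = scrollView.contentOffset

    // Pass the image's own scale so the cropped output keeps its full pixel density.
    UIGraphicsBeginImageContextWithOptions(CGSize(width: 240, height: 240), false, image.scale)
    defer { UIGraphicsEndImageContext() }

    UIBezierPath(ovalIn: CGRect(x: 0, y: 0, width: 240, height: 240)).addClip()
    image.draw(in: CGRect(x: -offset.x, y: -offset.y, width: newSize.width, height: newSize.height))

    if let finalImage = UIGraphicsGetImageFromCurrentImageContext() {
        UIImageWriteToSavedPhotosAlbum(finalImage, nil, nil, nil)
    }
}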

Here's a way in Swift 3 to disable any interpolation.
UIGraphicsBeginImageContextWithOptions(imageSize, false, UIScreen.main.scale)
let context = UIGraphicsGetCurrentContext()
context?.interpolationQuality = CGInterpolationQuality.none
Additionally, since this is often for pixel drawing purposes, you may want to add:
// Ensure the image is not drawn upside down
context?.saveGState()
context?.translateBy(x: 0.0, y: maxHeight)
context?.scaleBy(x: 1.0, y: -1.0)
// Make the drawing (yourImage here is a CGImage)
context?.draw(yourImage, in: CGRect(x: 0, y: 0, width: maxWidth, height: maxHeight))
// Restore the GState of the context
context?.restoreGState()
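Putting those pieces together, here is a minimal sketch of the whole pattern as a single function (the renderWithoutInterpolation name and the cgImage parameter are illustrative, assuming the source is a CGImage):
// Render a CGImage without interpolation, flipped to match UIKit's top-left origin,
// and return the result as a UIImage.
func renderWithoutInterpolation(_ cgImage: CGImage, size: CGSize) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(size, false, UIScreen.main.scale)
    defer { UIGraphicsEndImageContext() }

    guard let context = UIGraphicsGetCurrentContext() else { return nil }
    context.interpolationQuality = .none

    // Core Graphics draws with a bottom-left origin, so flip vertically first.
    context.saveGState()
    context.translateBy(x: 0.0, y: size.height)
    context.scaleBy(x: 1.0, y: -1.0)
    context.draw(cgImage, in: CGRect(origin: .zero, size: size))
    context.restoreGState()

    return UIGraphicsGetImageFromCurrentImageContext()
}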

Related

Understanding the CGRect coordinate system and iOS drawing

I am really confused about how a CGRect is interpreted and drawn. I have the following routine to draw a rectangle:
let rectangle = CGRect(x: 0, y: 0, width: 100, height: 100)
ctx.cgContext.setFillColor(UIColor.red.cgColor)
ctx.cgContext.setStrokeColor(UIColor.green.cgColor)
ctx.cgContext.setLineWidth(1)
ctx.cgContext.addRect(rectangle)
ctx.cgContext.drawPath(using: .fillStroke)
Now, I was expecting this to draw a rectangle at the origin of the screen with a width and height of 100 (pixels or points, I'm not sure which).
However, I can barely see the drawn rectangle. Most of it seems to lie outside the screen (on the left-hand side).
Changing this to:
let rectangle = CGRect(x: 50, y: 50, width: 100, height: 100)
Still around half the rectangle is outside the screen.
Is the origin of this CGRect at the bottom right? I am not sure how this is all getting interpreted.
I thought that if the origin were at the center of the rectangle, then surely the second call would show the whole rectangle?
[EDIT]
I am running this on an iPhone X running iOS 14.4.
[EDIT]
The full drawing code is as follows. This is part of a view; in the end, the image we draw is assigned to the view's image.
func show(on frame: CGImage) {
    // This is of dimension 480 x 640
    let dstImageSize = CGSize(width: frame.width, height: frame.height)
    let dstImageFormat = UIGraphicsImageRendererFormat()
    dstImageFormat.scale = 1
    let renderer = UIGraphicsImageRenderer(size: dstImageSize,
                                           format: dstImageFormat)
    let dstImage = renderer.image { rendererContext in
        // Draw the current frame as the background for the new image.
        draw(image: frame, in: rendererContext.cgContext)
        // This is where I draw my rectangle
        let rectangle = CGRect(x: 0, y: 0, width: 100, height: 100)
        rendererContext.cgContext.setAlpha(0.3)
        rendererContext.cgContext.setFillColor(UIColor.red.cgColor)
        rendererContext.cgContext.setStrokeColor(UIColor.green.cgColor)
        //rendererContext.cgContext.setLineWidth(1)
        rendererContext.cgContext.addRect(rectangle)
        rendererContext.cgContext.drawPath(using: .fill)
    }
    image = dstImage
}
So, from what I can tell, it should draw on the image's context and should not go out of bounds with the parameters I gave.
I mean, the context is that of the bitmap that has been initialized to 480 x 640. I am not sure why the rectangle appears out of bounds when I view it on the device. Should this bitmap/image not be shown correctly?
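For what it's worth, UIGraphicsImageRenderer hands its closure a CGContext that is already configured with UIKit's coordinate system, so (0, 0) is the top-left corner of the bitmap and y grows downward. A standalone sketch (not the asker's code) that draws a 100 x 100 rectangle fully inside a 480 x 640 bitmap:
import UIKit

let format = UIGraphicsImageRendererFormat()
format.scale = 1
let renderer = UIGraphicsImageRenderer(size: CGSize(width: 480, height: 640), format: format)

let result = renderer.image { ctx in
    ctx.cgContext.setFillColor(UIColor.red.cgColor)
    // Anchored at the top-left corner of the bitmap, well within its bounds.
    ctx.cgContext.fill(CGRect(x: 0, y: 0, width: 100, height: 100))
}
// `result` is a 480 x 640 UIImage with a red 100 x 100 square in its top-left corner.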

Adding Text View to Image on iPhone X "squeezes" image on output

I currently have a block of code that adds a text view on top of an image, with the ultimate goal of saving the new image with the overlaid text. Here is the code to do that:
class func addText(label: UITextView, imageSize: CGSize, image: UIImage) -> UIImage {
    let scale = UIScreen.main.scale
    UIGraphicsBeginImageContextWithOptions(CGSize(width: imageSize.width, height: imageSize.height), false, scale)

    let currentView = UIView(frame: CGRect(x: 0, y: 0, width: imageSize.width, height: imageSize.height))
    let currentImage = UIImageView(image: image)
    currentImage.frame = CGRect(x: 0, y: 0, width: imageSize.width, height: imageSize.height)
    currentView.addSubview(currentImage)
    currentView.addSubview(label)
    currentView.layer.render(in: UIGraphicsGetCurrentContext()!)

    let img = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return img!
}
And it is called as below (the image is just a standard 1920x1080 photo taken by the phone's camera):
self.imageToEdit.image = UIImage.addText(label: textView, imageSize: UIScreen.main.bounds.size, image: self.imageToEdit.image!)
This works great when I test on an iPhone 6s, but when I test on an iPhone X, it "squeezes" the sides of the image, so faces and other features become skinnier in the image that is returned by addText.
I have a hunch it is due to the image being extended up through the notch of the iPhone X which is causing some type of scaling/aspect fill, but I'm not sure where to begin looking.
Does anyone know how to stop the "squeezing" from happening on the iPhone X? (I am also guessing this happens on all the other iPhone models that have a notch.)
Thanks.
Just figured it out!
I needed to include this line:
currentImage.contentMode = .scaleAspectFill
in my addText func.
Because I was creating a new UIImageView, I needed to make sure it had the same content mode as the original view.
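For clarity, a sketch of where that line goes in the addText function from the question (only the image-view setup is shown; everything else is unchanged):
let currentImage = UIImageView(image: image)
currentImage.frame = CGRect(x: 0, y: 0, width: imageSize.width, height: imageSize.height)
// Match the content mode of the on-screen image view so the render
// isn't stretched to fit the context's aspect ratio.
currentImage.contentMode = .scaleAspectFill
currentView.addSubview(currentImage)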

Draw UIImage with scale and translate

I am trying to draw a "zoomed-in" portion of a UIImage and can't find a way to do it...
Say I have an image sized 1000x1000, and I zoom-in to the top left (x:0, y:0 width: 500, height: 500), and I want to draw that portion into another 1000x1000 image.
I am using UIGraphicsImageRenderer, but UIImage's draw method doesn't accept a source rect, only a destination rect (and it draws the entire image).
I was able to achieve what I want by specifying a larger rect in the draw method, but that crashes with an out-of-memory error when the zoom level is large.
This is what I tried:
let srcImg: UIImage = {some UIImage sized 1000x1000}
let renderer = UIGraphicsImageRenderer(size: CGSize(width: 1000, height: 1000))
let scaled = renderer.image { ctx in
    srcImg.draw(in: CGRect(origin: CGPoint.zero, size: CGSize(width: 2000, height: 2000)))
}
Basically I am trying to achieve something like the drawImage API of HTML5 canvas, which gets both src and dst rectangles...
Thanks.
Use the UIImage extension from this answer: https://stackoverflow.com/a/28513086/6257435
If I understand your question correctly -- you want to clip the top-left 500x500 pixels from a 1000x1000 image, and scale that up to 1000x1000 -- here is a simple example using that extension:
if let img1000x1000 = UIImage(named: "img1000x1000") {
    if let topLeft500x500 = img1000x1000.cropped(to: CGRect(x: 0, y: 0, width: 500, height: 500)) {
        if let new1000x1000 = topLeft500x500.filled(to: CGSize(width: 1000, height: 1000)) {
            // do something with new 1000x1000 image
            print()
        }
    }
}
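The cropped(to:) and filled(to:) helpers live in the linked answer and aren't reproduced above; as a rough idea of what such an extension can look like (a sketch under my own naming, not the linked author's exact code):
import UIKit

extension UIImage {
    // Return the portion of the image inside `rect` (rect is in point coordinates).
    func cropped(to rect: CGRect) -> UIImage? {
        let pixelRect = CGRect(x: rect.origin.x * scale,
                               y: rect.origin.y * scale,
                               width: rect.size.width * scale,
                               height: rect.size.height * scale)
        guard let cgCropped = cgImage?.cropping(to: pixelRect) else { return nil }
        return UIImage(cgImage: cgCropped, scale: scale, orientation: imageOrientation)
    }

    // Draw the image scaled up (or down) to fill a new context of `size`.
    func filled(to size: CGSize) -> UIImage? {
        let format = UIGraphicsImageRendererFormat()
        format.scale = scale
        let renderer = UIGraphicsImageRenderer(size: size, format: format)
        return renderer.image { _ in
            draw(in: CGRect(origin: .zero, size: size))
        }
    }
}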

Setting setMinimumTrackImage of UISlider corners with different color

I am currently working on a problem where I need to give the right and left corners of a UISlider different colors.
Currently, this is what I have got so far.
If you notice, on the right side, when I don't alter the maximum track image I get the rounded result, but when I start altering the minimum track image to add some color variation at certain x placements, I don't get the rounded shape. I tried to inspect it in the Debug View Hierarchy and it does fall under the same layer.
Below is the code for what I am trying to achieve.
let color = UIColor.green
color.setFill()
context.translateBy(x: 0, y: image.size.height)
context.scaleBy(x: 1.0, y: -1.0)
context.clip(to: CGRect(x: 0, y: 0, width: 20, height: image.size.height), mask: image.cgImage!)
context.fill(CGRect(x: 0, y: 0, width: 20, height: image.size.height))
let coloredImg = UIGraphicsGetImageFromCurrentImageContext()
Is there anything I am missing here?
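For context, the snippet above assumes an image context and a `context` variable have already been set up; a sketch of the usual surrounding boilerplate (the tintLeadingEdge name is hypothetical, and `image` is assumed to be the track image being tinted):
func tintLeadingEdge(of image: UIImage) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(image.size, false, image.scale)
    defer { UIGraphicsEndImageContext() }
    guard let context = UIGraphicsGetCurrentContext(), let cgImage = image.cgImage else { return nil }

    let color = UIColor.green
    color.setFill()

    // Core Graphics uses a bottom-left origin, so flip to match UIKit before masking.
    context.translateBy(x: 0, y: image.size.height)
    context.scaleBy(x: 1.0, y: -1.0)

    // Clip to the left 20 points of the track image and fill that region green.
    context.clip(to: CGRect(x: 0, y: 0, width: 20, height: image.size.height), mask: cgImage)
    context.fill(CGRect(x: 0, y: 0, width: 20, height: image.size.height))

    return UIGraphicsGetImageFromCurrentImageContext()
}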

Why do I get a blurry screenshot with Swift?

I'm trying to take a screenshot, but my resulting image is a little bit blurry. How can I fix it and make it clearer?
let window = UIApplication.sharedApplication().delegate!.window!!
UIGraphicsBeginImageContextWithOptions(CGSize(width: UIScreen.mainScreen().bounds.width, height: UIScreen.mainScreen().bounds.height - 80), true, UIScreen.mainScreen().scale)
window.drawViewHierarchyInRect(window.bounds, afterScreenUpdates: true)
let windowImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
UIGraphicsBeginImageContext(CGSize(width: UIScreen.mainScreen().bounds.width, height: UIScreen.mainScreen().bounds.height - 145))
windowImage.drawAtPoint(CGPoint(x: -0, y: -65))
let croppedImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(croppedImage, nil, nil, nil)
That's my code
Look at these:
UIScreen.mainScreen().bounds.height - 80
UIScreen.mainScreen().bounds.height - 145
windowImage.drawAtPoint(CGPoint(x: -0, y: -65))
Double-check the 80, 145, -0, and -65 values; I think they don't match your actual main screen height.
Thanks
You want to be using UIGraphicsBeginImageContextWithOptions() for both of your image contexts, and specifying the scale of the device's screen in both (or just input 0.0 to allow Core Graphics to auto-detect the screen scale).
UIGraphicsBeginImageContext() defaults to a scale of 1.0, so you're effectively drawing a high resolution image in your first context, then downsampling it in your second.
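A minimal sketch of the cropping step with that change applied (updated to current Swift; the 145 and -65 offsets are taken from the question, and `windowImage` is assumed to be the screenshot captured in the first context):
let cropSize = CGSize(width: UIScreen.main.bounds.width,
                      height: UIScreen.main.bounds.height - 145)
// Pass 0.0 so the context uses the device's screen scale instead of 1.0.
UIGraphicsBeginImageContextWithOptions(cropSize, true, 0.0)
windowImage.draw(at: CGPoint(x: 0, y: -65))
let croppedImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()

if let croppedImage = croppedImage {
    UIImageWriteToSavedPhotosAlbum(croppedImage, nil, nil, nil)
}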
