Why is my screenshot blurry in Swift? - ios

I'm trying to take a screenshot, but the resulting image is a little blurry. How can I fix this and make it sharper?
let window = UIApplication.sharedApplication().delegate!.window!!

UIGraphicsBeginImageContextWithOptions(
    CGSize(width: UIScreen.mainScreen().bounds.width,
           height: UIScreen.mainScreen().bounds.height - 80),
    true, UIScreen.mainScreen().scale)
window.drawViewHierarchyInRect(window.bounds, afterScreenUpdates: true)
let windowImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()

UIGraphicsBeginImageContext(
    CGSize(width: UIScreen.mainScreen().bounds.width,
           height: UIScreen.mainScreen().bounds.height - 145))
windowImage.drawAtPoint(CGPoint(x: -0, y: -65))
let croppedImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()

UIImageWriteToSavedPhotosAlbum(croppedImage, nil, nil, nil)
That's my code.

Look at these lines:
UIScreen.mainScreen().bounds.height - 80
UIScreen.mainScreen().bounds.height - 145
windowImage.drawAtPoint(CGPoint(x: -0, y: -65))
Double-check the hard-coded values 80, 145, -0, and -65; I suspect they don't match your device's actual screen height.
Thanks

You want to be using UIGraphicsBeginImageContextWithOptions() for both of your image contexts, specifying the device's screen scale in both (or just pass 0.0 to let Core Graphics detect the screen scale automatically).
UIGraphicsBeginImageContext() defaults to a scale of 1.0, so you're effectively drawing a high-resolution image in your first context and then downsampling it in your second.
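Concretely, here's a minimal sketch of the corrected flow in current Swift naming (the 80/145/65 offsets are the question's values and may need tuning per device):

let window = UIApplication.shared.delegate!.window!!
let screenBounds = UIScreen.main.bounds

// First context: capture at the screen's scale (0.0 = auto-detect)
UIGraphicsBeginImageContextWithOptions(
    CGSize(width: screenBounds.width, height: screenBounds.height - 80),
    true, 0.0)
window.drawHierarchy(in: window.bounds, afterScreenUpdates: true)
let windowImage = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()

// Second context: the same options call, so the crop keeps full resolution
UIGraphicsBeginImageContextWithOptions(
    CGSize(width: screenBounds.width, height: screenBounds.height - 145),
    true, 0.0)
windowImage.draw(at: CGPoint(x: 0, y: -65))
let croppedImage = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()

UIImageWriteToSavedPhotosAlbum(croppedImage, nil, nil, nil)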

Related

Adding Text View to Image on iPhone X "squeezes" image on output

I currently have a block of code that adds a text view on top of an image, with the ultimate goal of saving a new image with the overlaid text. Here is the code to do that:
class func addText(label: UITextView, imageSize: CGSize, image: UIImage) -> UIImage {
    let scale = UIScreen.main.scale
    UIGraphicsBeginImageContextWithOptions(CGSize(width: imageSize.width, height: imageSize.height), false, scale)
    let currentView = UIView.init(frame: CGRect(x: 0, y: 0, width: imageSize.width, height: imageSize.height))
    let currentImage = UIImageView.init(image: image)
    currentImage.frame = CGRect(x: 0, y: 0, width: imageSize.width, height: imageSize.height)
    currentView.addSubview(currentImage)
    currentView.addSubview(label)
    currentView.layer.render(in: UIGraphicsGetCurrentContext()!)
    let img = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return img!
}
And it is called like below (The image is just a standard 1920x1080 image taken by the phone's camera):
self.imageToEdit.image = UIImage.addText(label: textView, imageSize: UIScreen.main.bounds.size, image: self.imageToEdit.image!)
This works great when I test on an iPhone 6s, but on an iPhone X it "squeezes" the sides of the image, so faces and other features become skinnier in the image returned by addText.
I have a hunch it is due to the image extending up through the notch of the iPhone X, which causes some kind of scaling/aspect fill, but I'm not sure where to begin looking.
Does anyone know how to stop the "squeezing" from happening on iPhone X? (I'm also guessing this happens on all the other iPhone models that have a notch.)
Thanks.
Just figured it out!
I needed to include this line:
currentImage.contentMode = .scaleAspectFill
in my addText func.
Because addText creates a new UIImageView internally, I needed to make sure it had the same content mode as the original on-screen view.
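In context, the fix sits right where the image view is created inside addText (only the contentMode line is new):

let currentImage = UIImageView.init(image: image)
// Match the on-screen image view's content mode so the render isn't squeezed
currentImage.contentMode = .scaleAspectFill
currentImage.frame = CGRect(x: 0, y: 0, width: imageSize.width, height: imageSize.height)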

UIImage convert to data and back changes size?

I create a picture programmatically, convert it to data and back, and get back a picture with a different size.
let image1: UIImage = {
    let size = CGSize(width: 50, height: 50)
    let rect = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    UIGraphicsBeginImageContextWithOptions(size, false, 0)
    UIColor.black.setFill()
    UIRectFill(rect)
    let image: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return image
}()

let data = UIImagePNGRepresentation(image1)!
let image2 = UIImage(data: data)!

print(image1.size) // (50.0, 50.0)
print(image2.size) // (100.0, 100.0)
Please explain what happens and how to solve the problem. Thank you!
The "culprit" line:
UIGraphicsBeginImageContextWithOptions(size, false, 0)
Looking at the doc of UIGraphicsBeginImageContextWithOptions(), for the last parameter (scale)
scale The scale factor to apply to the bitmap. If you specify a value of 0.0, the scale factor is set to the scale factor of the
device’s main screen.
If your device is Retina (*2), then the scale factor will be 2.
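If you need image2 to report the same 50×50-point size, one option (a sketch) is to decode the data at the original image's scale, which the PNG bytes themselves don't carry:

let data = UIImagePNGRepresentation(image1)!
// Pass the original scale back in when decoding
let image2 = UIImage(data: data, scale: image1.scale)!

print(image2.size) // (50.0, 50.0)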

How to use UIGraphicsBeginImageContextWithOptions to draw non-standard shapes?

I have an image which is a bubble shape, and I wish to add text to the middle of the image, so I tried to use UIGraphicsBeginImageContextWithOptions to redraw the image.
The code is as following:
let imageSize = CGSize(width: 40, height: 40)
// the rect in which the image will be drawn
let imageRect = CGRect(origin: CGPoint.zero, size: imageSize)
UIGraphicsBeginImageContextWithOptions(imageSize, true, 1.0)
// begin drawing: first, draw the image into the specified rect
image.draw(in: imageRect)
let attributes = [NSAttributedStringKey.foregroundColor: UIColor.red,
                  NSAttributedStringKey.font: UIFont.boldSystemFont(ofSize: 20)]
let text = "55"
let size = text.size(withAttributes: attributes)
let rect = CGRect(x: 20 - size.width / 2, y: 20 - size.height / 2,
                  width: size.width, height: size.height)
text.draw(in: rect, withAttributes: attributes)
let newImage = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
return newImage
but I get a square with a black background with the desired image inside:
the blue is the original image; the black is the redrawn one. Does anyone know how I can make the redrawn image look like the original?
Thanks!
You need to pass false, not true, as the 2nd argument (the opaque parameter) of the UIGraphicsBeginImageContextWithOptions call.
UIGraphicsBeginImageContextWithOptions(imageSize, false, 1.0)
You should also pass 0 as the 3rd argument (scale) so the image is scaled to match the current device's screen scale.
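Putting both suggestions together, only the context call changes; a minimal sketch:

// false = keep the area outside the bubble transparent; 0 = match the screen scale
UIGraphicsBeginImageContextWithOptions(imageSize, false, 0)
image.draw(in: imageRect) // the bubble's transparent corners stay transparent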

UIImage (Frame) and UIImage (Picture) merge

I have frames in multiple sizes, which can be hard-coded or decided by the server. I have to select an image from the gallery, which can have any dimensions.
I select an image from the gallery, then generate a white-background UIImage in code:
let size = CGSize(width: 424/2, height: 664/2)
UIGraphicsBeginImageContextWithOptions(size, true, 0)
UIColor.white.setFill()
UIRectFill(CGRect(x: 0, y: 0, width: size.width, height: size.height))
let background_image: UIImage? = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Now I want to make another image that starts 20 pixels from the leading edge and 20 pixels from the top, with width and height 20 pixels smaller than the original background.
How can I achieve that?
Here's what I tried before coming to Stack Overflow:
func mergedImageWith(frontImage: UIImage?, backgroundImage: UIImage?) -> UIImage {
    if backgroundImage == nil {
        return frontImage!
    }
    let size = CGSize(width: 424/2, height: 664/2)
    UIGraphicsBeginImageContextWithOptions(size, true, 0)
    UIColor.white.setFill()
    UIRectFill(CGRect(x: 0, y: 0, width: size.width, height: size.height))
    let backgroundImage2: UIImage? = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
    backgroundImage2?.draw(in: CGRect.init(x: 0, y: 0, width: size.width, height: size.height))
    // size2 and getAspectFillFrame are defined elsewhere in my code
    frontImage?.draw(in: getAspectFillFrame(sizeImageView: size2, sizeImage: (frontImage?.size)!))
    let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return newImage
}
Here the background image is created and the front image is drawn with aspect fill, but the problem is the front image's starting position and its overall width and height.
In very simple words: it's like making custom frames and merging them with images (aspect fill) for printing.
Can anyone help me out?
Thanks.
Try not ending your image context until all of the images are drawn. (I am also including some code that I have working, edited down a bit; the <rect> placeholders stand in for real frames.)

class layeredImageView: UIImageView {
    var imageBackground: UIImage!
    var imageForeground: UIImage!

    // Wrapper function added here; the original snippet had these lines bare in the class body
    func renderLayers() {
        UIGraphicsBeginImageContextWithOptions(self.frame.size, false, UIScreen.main.scale)
        self.image?.draw(in: self.frame)
        imageBackground.draw(in: CGRect(<rect>))
        imageForeground.draw(in: CGRect(<rect>))
        self.image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
    }
}
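If the goal is specifically the 20-point inset from the question, here is a minimal sketch using the question's 424/2 × 664/2 canvas; the function name is illustrative, and aspect-fill cropping of the front image is left to the asker's getAspectFillFrame helper:

// Sketch: white canvas with the front image inset 20 points from the leading/top edges
func mergeWithInset(front: UIImage, inset: CGFloat = 20) -> UIImage? {
    let size = CGSize(width: 424 / 2, height: 664 / 2)
    UIGraphicsBeginImageContextWithOptions(size, true, 0)
    defer { UIGraphicsEndImageContext() }

    // White background filling the whole canvas
    UIColor.white.setFill()
    UIRectFill(CGRect(origin: .zero, size: size))

    // Starts `inset` points in; width and height are `inset` points smaller than the canvas
    front.draw(in: CGRect(x: inset, y: inset,
                          width: size.width - inset, height: size.height - inset))

    return UIGraphicsGetImageFromCurrentImageContext()
}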

Image blurred slightly when drawing in drawRect (Swift)

I am building a circle crop function in Swift. I pretty much have everything working; however, when I save the cropped photo it is slightly blurry:
Not sure if it is visible here or not, but on my iPhone I can see a slight blurring. I'm not zooming in beyond the actual size of the image, and I'm using a UIScrollView with the maximum zoom factor set to 1.0. The crop code is:
func didTapOk() {
    let scale = scrollView.zoomScale
    let newSize = CGSize(width: image!.size.width * scale, height: image!.size.height * scale)
    let offset = scrollView.contentOffset

    UIGraphicsBeginImageContext(CGSize(width: 240, height: 240))
    let circlePath = UIBezierPath(ovalInRect: CGRect(x: 0, y: 0, width: 240, height: 240))
    circlePath.addClip()
    var sharpRect = CGRect(x: -offset.x, y: -offset.y, width: newSize.width, height: newSize.height)
    sharpRect = CGRectIntegral(sharpRect)
    image?.drawInRect(sharpRect)
    let finalImage = UIGraphicsGetImageFromCurrentImageContext()
    UIImageWriteToSavedPhotosAlbum(finalImage, nil, nil, nil)
}
Here I am trying to use CGRectIntegral to improve the result but it doesn't seem to make any difference. Any pointers on what I could do to improve this?
What's happening is that your crop is blurry because it hasn't accounted for the scale of your screen (whether you're using a Retina display, etc.).
Use UIGraphicsBeginImageContextWithOptions(CGSize(width: 240, height: 240), true, 0) to account for the Retina screen. Check here for more details on how to use this function.
I suspect that, although the accepted answer above is working for you, it is only working due to the screen scale being the same as the image scale.
As you are not rendering your image from the screen itself (you're rendering from a UIImage to another UIImage), the screen scale should be irrelevant.
You should instead be passing in the scale of the image when you create your context, like so:
UIGraphicsBeginImageContextWithOptions(CGSize(width: 240, height: 240), false, image.scale)
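Applied to the question's didTapOk, the change looks something like this (Swift 3 naming; only the context call and the previously missing UIGraphicsEndImageContext differ from the original):

func didTapOk() {
    let scale = scrollView.zoomScale
    let newSize = CGSize(width: image!.size.width * scale,
                         height: image!.size.height * scale)
    let offset = scrollView.contentOffset

    // Use the image's own scale rather than the default 1.0
    UIGraphicsBeginImageContextWithOptions(CGSize(width: 240, height: 240),
                                           false, image!.scale)
    UIBezierPath(ovalIn: CGRect(x: 0, y: 0, width: 240, height: 240)).addClip()
    image?.draw(in: CGRect(x: -offset.x, y: -offset.y,
                           width: newSize.width, height: newSize.height))
    let finalImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    UIImageWriteToSavedPhotosAlbum(finalImage, nil, nil, nil)
}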
Here's a way in Swift 3 to disable any interpolation.
UIGraphicsBeginImageContextWithOptions(imageSize, false, UIScreen.main.scale)
let context = UIGraphicsGetCurrentContext()
context?.interpolationQuality = CGInterpolationQuality.none
Additionally, since this is often for pixel drawing purposes, you may want to add:
// Ensure the image is not drawn upside down
context?.saveGState()
context?.translateBy(x: 0.0, y: maxHeight)
context?.scaleBy(x: 1.0, y: -1.0)
// Make the drawing
context?.draw(yourImage, in: CGRect(x: 0, y: 0, width: maxWidth, height: maxHeight)) // yourImage is a CGImage
// Restore the GState of the context
context?.restoreGState()
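The snippet above leaves the context open; to finish the render you would read the result back and close the context as usual:

let result = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()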
