Losing quality when rendering a UIImageView in Swift - iOS

I am using the code below in Swift to capture a UIImageView into a single image. It works, but the image is not the same quality as the one shown in the UIImageView. Is there a way to configure the quality when capturing this image?
private func getScreenshot(imageView: UIImageView) -> UIImage {
    // Render the image view's layer into a bitmap context.
    UIGraphicsBeginImageContext(imageView.frame.size)
    let context = UIGraphicsGetCurrentContext()
    imageView.layer.render(in: context!)
    let screenShot = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    // Save the capture to the photo library and return it.
    UIImageWriteToSavedPhotosAlbum(screenShot, nil, nil, nil)
    return screenShot
}

After some searching I figured out the issue. I used the code below to replace UIGraphicsBeginImageContext and it works.
UIGraphicsBeginImageContextWithOptions(imageView.frame.size, true, 0)

This code looks pretty weird (why don’t you just use imageView.image?) but I don’t know the full context of your use case.
As you found, the reason for the loss of quality is that you are ignoring the screen’s retina scale.
Read the documentation for UIGraphicsBeginImageContext and UIGraphicsBeginImageContextWithOptions and you’ll see the former uses a ‘scale factor of 1.0’.
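If you can target iOS 10 or later, here is a minimal sketch using UIGraphicsImageRenderer (my suggestion, not part of the original answer) that sidesteps the scale issue entirely, because the renderer picks up the screen's scale by default:

private func screenshot(of imageView: UIImageView) -> UIImage {
    // UIGraphicsImageRenderer (iOS 10+) defaults to the device's screen scale,
    // so the output is full resolution on Retina displays.
    let renderer = UIGraphicsImageRenderer(bounds: imageView.bounds)
    return renderer.image { context in
        imageView.layer.render(in: context.cgContext)
    }
}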

Related

Get visible portion of an image from a UIImageView in a UIScrollView

I have a UIImageView in a UIScrollView which can be zoomed in and out. Now, after the user has selected the specific content to zoom in on, I want to crop that part of the image shown in the scroll view and get it in the form of a UIImage.
For that I am using:
extension UIScrollView {
    var snapshotVisibleArea: UIImage? {
        UIGraphicsBeginImageContext(bounds.size)
        UIGraphicsGetCurrentContext()?.translateBy(x: -contentOffset.x, y: -contentOffset.y)
        layer.render(in: UIGraphicsGetCurrentContext()!)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}
But when I implement this, the quality of the image gets extremely degraded. Even if I use a 4K image, the final product looks like 360p resolution.
This logic is just basic capturing of the screen content.
I know there can be a better way but I am not able to find a solution.
Any help is highly appreciated.
You can try this:
let context: CGContext = UIGraphicsGetCurrentContext()!
context.interpolationQuality = .high
Also, I'm not sure, but image quality could be improved if you initialize the image context with this code: UIGraphicsBeginImageContextWithOptions(rect.size, false, 0.0)
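Putting both suggestions together, a rough sketch of the snapshot property might look like this (untested, and it assumes the interpolation setting is applied before rendering):

extension UIScrollView {
    var snapshotVisibleArea: UIImage? {
        // Scale 0 = use the screen's scale instead of 1.0.
        UIGraphicsBeginImageContextWithOptions(bounds.size, false, 0)
        defer { UIGraphicsEndImageContext() }
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        context.interpolationQuality = .high
        // Shift the context so the currently visible region lands at the origin.
        context.translateBy(x: -contentOffset.x, y: -contentOffset.y)
        layer.render(in: context)
        return UIGraphicsGetImageFromCurrentImageContext()
    }
}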

How to downsize an image using bilinear filtering in Swift/iOS?

I need to resize an image using bilinear filtering, but I'm not sure how this can be done in Swift. In Python, PIL as well as OpenCV allow us to choose an interpolation method when resizing (https://docs.scipy.org/doc/scipy/reference/generated/scipy.misc.imresize.html). I was hoping to mimic the process in iOS for a Core ML model.
I searched around for resizing methods but this is what I was able to find via a blog:
extension UIImage {
    func resizeUI(size: CGSize) -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(size, true, self.scale)
        self.draw(in: CGRect(origin: .zero, size: size))
        let resizedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return resizedImage
    }
}
However, it doesn't seem to allow you to choose the interpolation method.
Thank you.
EDIT:
I found imageBySamplingLinear, but this method doesn't allow you to resize the image using an interpolation method. It just takes the image at its current size and interpolates it?
https://developer.apple.com/documentation/coreimage/ciimage/2867346-imagebysamplinglinear?language=objc
The real algorithm, on the other hand, resizes the image using the interpolation:
https://rosettacode.org/wiki/Bilinear_interpolation#Python
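For what it's worth, one way I would expect an explicitly bilinear downscale to work (an assumption on my part, not a confirmed answer) is to combine samplingLinear() with a scale transform, since Core Image applies the chosen sampling mode when the image is transformed:

import CoreImage
import UIKit

// Sketch: downsize a UIImage with bilinear (linear) sampling via Core Image.
// targetSize is in pixels; requires iOS 11+ for samplingLinear()/transformed(by:).
func downsizeBilinear(_ image: UIImage, to targetSize: CGSize) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }
    let input = CIImage(cgImage: cgImage)
    let scaleX = targetSize.width / input.extent.width
    let scaleY = targetSize.height / input.extent.height
    // samplingLinear() asks Core Image to use bilinear sampling for the
    // transform that follows.
    let scaled = input
        .samplingLinear()
        .transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))
    let context = CIContext()
    guard let output = context.createCGImage(scaled, from: scaled.extent) else { return nil }
    return UIImage(cgImage: output)
}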

Swift how to place pictures on top of pictures

I would like to make an app which enables you to take a photo and then choose from a set of pre-made "pictures", as it were, to apply on top of that photo.
For example, you take a photo of someone and then apply a mustache, a chicken and fake lips to it.
An example of such an app is the Aokify app.
However, I've searched all corners of the internet but can't find an example that points me in the right direction.
Another, simpler implementation may be to use a UIImageView as a parent view, then add a UIImageView as a subview for each image you wish to overlay on top of the original.
let mainImage = UIImage(named: "main-pic")
let overlayImage = UIImage(named: "overlay")
let mainImageView = UIImageView(image: mainImage)
let overlayImageView = UIImageView(image: overlayImage)
self.view.addSubview(mainImageView)
mainImageView.addSubview(overlayImageView)
Edit: Since this has become the accepted answer, I feel it is worth mentioning that there are also different options for positioning the overlayImageView: you can add the overlay to the same parent after the first view has been added, or you can add the overlay as a subview of the main imageView as the example demonstrates.
The difference is the frame of reference when setting the coordinates for your overlay frame: whether you want them to have the same coordinate space, or whether you want the overlay coordinates to be relative to the main image rather than the parent.
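For example (the frame values here are purely illustrative), the difference shows up when you position the overlay:

// Option 1: overlay added to the same parent; coordinates are in self.view's space.
self.view.addSubview(overlayImageView)
overlayImageView.frame = CGRect(x: 120, y: 200, width: 80, height: 40)

// Option 2: overlay added to the main image view; coordinates are relative
// to mainImageView's own bounds.
mainImageView.addSubview(overlayImageView)
overlayImageView.frame = CGRect(x: 20, y: 60, width: 80, height: 40)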
To answer the question properly and fulfil the requirement, you will need to add an option for moving and placing the overlay image at the proper position relative to the original image, but the code for drawing one image over another is the following.
For Swift 3:
extension UIImage {
    func overlayed(with overlay: UIImage) -> UIImage? {
        defer {
            UIGraphicsEndImageContext()
        }
        // Draw the base image, then the overlay, into a context that matches
        // the base image's size and scale.
        UIGraphicsBeginImageContextWithOptions(size, false, scale)
        self.draw(in: CGRect(origin: CGPoint.zero, size: size))
        overlay.draw(in: CGRect(origin: CGPoint.zero, size: size))
        if let image = UIGraphicsGetImageFromCurrentImageContext() {
            return image
        }
        return nil
    }
}
Usage:
image.overlayed(with: overlayImage)
Also available here as a gist.
The code was originally written to answer this question.
Thanks to jesses.co.tt for providing the hint I needed.
The approach is based on UIGraphics image contexts (UIGraphicsBeginImageContext and friends).
And the tutorial I finally found that did it: https://www.youtube.com/watch?v=m1QnT72I6f0 (it's by thenewboston).

Text View Screenshot in Swift?

I found some source code for taking a view screenshot. I changed it a little and tried it. But this code has a small problem: the screenshot resolution is really bad, and I need a high-resolution screenshot. I tried to add a comment, but I'm new on Stack Overflow. Anyway, what can I do about this?
Link : Screenshot in swift iOS?
My code :
func textViewSS() {
    // Create the UIImage
    UIGraphicsBeginImageContext(textView.frame.size)
    textView.layer.render(in: UIGraphicsGetCurrentContext()!)
    let image = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    // Save it to the camera roll
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
}
Sample Result :
http://i60.tinypic.com/s4wdn4.png
Try modifying the first line of your code to pass the scale if you are not satisfied with the resolution:
UIGraphicsBeginImageContextWithOptions(textView.frame.size, false, UIScreen.main.scale)
I don't know the requirements in your case, but drawViewHierarchyInRect is quicker/cheaper than renderInContext. You may want to consider it if it is applicable.
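A rough sketch of that alternative (modern Swift spelling, the function name is mine, and I'm assuming textView is the view you want to capture):

func textViewSnapshot() -> UIImage? {
    // Scale 0 = use the screen's scale, so the result is sharp on Retina displays.
    UIGraphicsBeginImageContextWithOptions(textView.bounds.size, false, 0)
    defer { UIGraphicsEndImageContext() }
    // drawHierarchy(in:afterScreenUpdates:) snapshots what is actually on screen.
    _ = textView.drawHierarchy(in: textView.bounds, afterScreenUpdates: true)
    return UIGraphicsGetImageFromCurrentImageContext()
}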

Resolution Loss when generating an Image from UIImageView

Okay, sorry if the title is a little confusing. Basically I am trying to get the image and subviews of the image view and combine them into a single exportable UIImage.
Here is my current code; however, it has a large resolution loss.
func generateImage() -> UIImage {
    UIGraphicsBeginImageContext(environmentImageView.frame.size)
    let context = UIGraphicsGetCurrentContext()!
    environmentImageView.layer.render(in: context)
    let img = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return img
}
You have to set the scale of the context to be retina.
UIGraphicsBeginImageContextWithOptions(environmentImageView.frame.size, false, 0)
Passing 0 means using the scale of the screen, which will work for non-Retina devices as well.
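Applied to your function, a minimal sketch (returning an optional to avoid force-unwrapping) could look like this:

func generateImage() -> UIImage? {
    // Scale 0 = match the screen's scale, so Retina devices get a full-resolution capture.
    UIGraphicsBeginImageContextWithOptions(environmentImageView.frame.size, false, 0)
    defer { UIGraphicsEndImageContext() }
    guard let context = UIGraphicsGetCurrentContext() else { return nil }
    environmentImageView.layer.render(in: context)
    return UIGraphicsGetImageFromCurrentImageContext()
}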
