Resizing image results in low resolution - iOS

So I am trying to use a specific PNG image for my map annotation. The original image is 761 x 761 and the resized annotation image that shows up in my app is all blurry and low-resolution-looking. Any idea why that is?
chargerAnnotationImage = UIImage(named: "ChargerGreen")!
let size = CGSize(width: 25, height: 25)
UIGraphicsBeginImageContext(size)
chargerAnnotationImage.drawInRect(CGRectMake(0, 0, size.width, size.height))
let resizedImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return resizedImage
Thank you!

Try this code to resize the image.
The highest-level APIs for image resizing can be found in the UIKit framework. Given a UIImage, a temporary graphics context can be used to render a scaled version, using UIGraphicsBeginImageContextWithOptions() and UIGraphicsGetImageFromCurrentImageContext():
let image = UIImage(named: "x-men")!
let size = CGSizeApplyAffineTransform(image.size, CGAffineTransformMakeScale(0.1, 0.1))
let hasAlpha = false
let scale: CGFloat = 0.0 // Automatically use scale factor of main screen
UIGraphicsBeginImageContextWithOptions(size, !hasAlpha, scale)
image.drawInRect(CGRect(origin: CGPointZero, size: size))
let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return scaledImage
UIGraphicsBeginImageContextWithOptions() creates a temporary rendering context into which the original is drawn. The first argument, size, is the target size of the scaled image. The second argument, isOpaque, determines whether an alpha channel is rendered. Setting this to false for images without transparency (i.e. no alpha channel) may result in an image with a pink hue. The third argument, scale, is the display scale factor. When set to 0.0, the scale factor of the main screen is used, which for Retina displays is 2.0 or higher (3.0 on the iPhone 6 Plus).
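On iOS 10 and later, the same pattern can be wrapped in UIGraphicsImageRenderer, which manages the context and picks up the screen scale by default. A minimal sketch (the extension name resized(to:) is just illustrative):

extension UIImage {
    // Returns a copy scaled to `size`; the default renderer format uses the
    // main screen's scale, so the result stays sharp on Retina displays.
    func resized(to size: CGSize, opaque: Bool = false) -> UIImage {
        let format = UIGraphicsImageRendererFormat.default()
        format.opaque = opaque
        let renderer = UIGraphicsImageRenderer(size: size, format: format)
        return renderer.image { _ in
            draw(in: CGRect(origin: .zero, size: size))
        }
    }
}

// e.g. for the 25 x 25 map annotation above:
// let annotationImage = UIImage(named: "ChargerGreen")?.resized(to: CGSize(width: 25, height: 25))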

Related

UIImage scale factor not taken account of in UITabBarItem image

I am cropping a UIImage by using UIGraphicsGetCurrentContext and then saving it to cache. I then show that UIImage as the image of one of my tab bar items. This works, however, the scale factor of the image is an issue. If I use UIGraphicsBeginImageContextWithOptions with the scale set to zero, this correctly crops it using the scale factor of the screen. However, when I set the UITabBarItem image, it seems to ignore the fact that the image should be scaled.
My code for scaling:
extension UIImage {
    func scaledImage(withSize size: CGSize, withBorder: Bool) -> UIImage {
        let imageView = UIImageView(image: self)
        imageView.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
        imageView.contentMode = .scaleAspectFit
        let layer = imageView.layer
        layer.masksToBounds = true
        layer.cornerRadius = size.width/2
        if withBorder {
            layer.borderColor = Styles.Colours.blue.colour.cgColor
            layer.borderWidth = 2
        }
        UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, false, 1) // Notice I've set the scale factor to 1 here for now. If I set it to 0 the image is too large on the tab bar
        defer { UIGraphicsEndImageContext() }
        layer.render(in: UIGraphicsGetCurrentContext()!)
        return UIGraphicsGetImageFromCurrentImageContext()!
    }
}
Used like this:
let defaultImage = image?.scaledImage(withSize: CGSize(width: 25, height: 25), withBorder: false)
Then I set the tab bar item like this:
self.tabBar.items?.last?.image = defaultImage?.withRenderingMode(.alwaysOriginal)
If I was to set a UIImage from my Assets, then it would take into account the scale factor. How do I fix this? Thanks!
Solved it by converting the UIImage to one that specifically defines the scale like this:
let scaledSelectedImage = UIImage(data: UIImagePNGRepresentation(image)!, scale: UIScreen.main.scale)
UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, false, 1) is saying that you want the image context rendered at a fixed 1x scale. If you set it to 0, it uses the screen's native scale:
UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, false, 0)
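For reference, a minimal sketch of the extension with the scale left at 0, so the rendered image carries the screen's scale factor and the tab bar sizes it in points rather than pixels (an adaptation for illustration, not the poster's final code; the border handling is omitted):

extension UIImage {
    func scaledImage(withSize size: CGSize) -> UIImage? {
        let imageView = UIImageView(image: self)
        imageView.frame = CGRect(origin: .zero, size: size)
        imageView.contentMode = .scaleAspectFit
        // Passing 0 lets UIKit use UIScreen.main.scale, so the result is a
        // size-point image at the device's native pixel density.
        UIGraphicsBeginImageContextWithOptions(size, false, 0)
        defer { UIGraphicsEndImageContext() }
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        imageView.layer.render(in: context)
        return UIGraphicsGetImageFromCurrentImageContext()
    }
}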

Draw image of viewcontroller by UIGraphicsBeginImageContextWithOptions giving not completely loaded one

I am making an image collage app.
After collaging the images, I'm trying to draw an output image from the view controller and upload it to a site.
UIGraphicsBeginImageContextWithOptions(self.viewDFrame.bounds.size, NO, 0.0f);
[self.viewDFrame.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage * img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Sometimes it outputs the fully loaded view controller, but mostly it outputs an incompletely loaded one like this, with the frame only.
This is the actual output image and the preview image that I wanted to get.
I am using this code to generate a single image (Swift 3.0 version):
UIGraphicsBeginImageContextWithOptions(/* CGSize: size of the frame to capture */, false, 0.0)
self.view.drawHierarchy(in: /* CGRect: frame of the view */, afterScreenUpdates: true)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
If you have a container view where you add the frame and images, render that view instead:
self.viewContainer.drawHierarchy(in: /* CGRect: frame of the container view */, afterScreenUpdates: true)
If you want to render the image without showing the view on screen and the frame sizes are fixed, refer to this question link.
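Putting those pieces together, a minimal Swift sketch of a reusable snapshot helper (the function name snapshot(of:) is just illustrative):

func snapshot(of view: UIView) -> UIImage? {
    // Scale 0 renders at the screen's scale so the capture is not blurry.
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, false, 0)
    defer { UIGraphicsEndImageContext() }
    // afterScreenUpdates: true waits for pending layout and drawing
    // before the view hierarchy is captured.
    view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
    return UIGraphicsGetImageFromCurrentImageContext()
}

// e.g. let collageImage = snapshot(of: viewContainer)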
How to merge two UIImages?
var bottomImage = UIImage(named: "bottom.png")
var topImage = UIImage(named: "top.png")
var size = CGSize(width: 300, height: 300)
UIGraphicsBeginImageContext(size)
let areaSize = CGRect(x: 0, y: 0, width: size.width, height: size.height)
bottomImage!.drawInRect(areaSize)
topImage!.drawInRect(areaSize, blendMode: kCGBlendModeNormal, alpha: 0.8)
var newImage:UIImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
I finally found out that I was setting the image using requestImageForAsset; this method is asynchronous, so renderInContext does not wait for the image to be loaded. I set options.synchronous = YES; on the PHImageRequestOptions and now the image is loaded before rendering.
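In Swift, the synchronous request looks roughly like this (a sketch; asset and imageView stand in for whatever the collage actually uses):

import Photos

let options = PHImageRequestOptions()
options.isSynchronous = true            // block until the full-quality image is delivered
options.deliveryMode = .highQualityFormat
PHImageManager.default().requestImage(for: asset,
                                      targetSize: PHImageManagerMaximumSize,
                                      contentMode: .aspectFit,
                                      options: options) { image, _ in
    // With isSynchronous = true this runs before the render pass,
    // so the layer snapshot sees the loaded image.
    imageView.image = image
}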

Resizing an image but preserving hard edges

I am using a piece of code from this link - Resize UIImage by keeping Aspect ratio and width, and it works perfectly, but I am wondering if it can be altered to preserve hard edges of pixels. I want to double the size of the image and keep the hard edge of the pixels.
class func resizeImage(image: UIImage, newHeight: CGFloat) -> UIImage {
    let scale = newHeight / image.size.height
    let newWidth = image.size.width * scale
    UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight))
    image.drawInRect(CGRectMake(0, 0, newWidth, newHeight))
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
What I want it to do
What it does
In Photoshop there is nearest-neighbour interpolation when resizing; is there something like that in iOS?
Inspired by the accepted answer, updated to Swift 5.
Swift 5
let image = UIImage(named: "Foo")!
let scale: CGFloat = 2.0
let newSize = image.size.applying(CGAffineTransform(scaleX: scale, y: scale))
UIGraphicsBeginImageContextWithOptions(newSize, false, UIScreen.main.scale)
let context = UIGraphicsGetCurrentContext()!
context.interpolationQuality = .none
let newRect = CGRect(origin: .zero, size: newSize)
image.draw(in: newRect)
let newImage = UIImage(cgImage: context.makeImage()!)
UIGraphicsEndImageContext()
Did a bit more digging and found the answer:
https://stackoverflow.com/a/25430447/4196903
but where it says
CGContextSetInterpolationQuality(context, kCGInterpolationHigh)
write
CGContextSetInterpolationQuality(context, CGInterpolationQuality.None)
instead.
I think you need to use the CISampler class (which is only available in iOS 9) and create your own custom image-processing filter for it.
You can find more information here and here too.

iOS Swift - Scaling image with image truncation

I'm passing an image to this method in order to scale it and return an image that isn't horizontal, which will be saved in the documents directory; however, the method is somehow truncating maybe a quarter of an inch off the side of the image.
Please advise.
func scaleImageWithImage(image: UIImage, size: CGSize) -> UIImage {
    let scale: CGFloat = max(size.width/image.size.width, size.height/image.size.height)
    let width: CGFloat = image.size.width * scale
    let height: CGFloat = image.size.height * scale
    let imageRect: CGRect = CGRectMake((size.width - width)/2.0, (size.height - height)/2.0, width, height)
    UIGraphicsBeginImageContextWithOptions(size, false, 0)
    image.drawInRect(imageRect)
    let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
You are drawing in rect imageRect but the graphics context itself is of size size. Thus, if size is smaller than imageRect.size, you're losing some information at the edge. Moreover, imageRect doesn't start at 0,0 — its origin is (size.width-width)/2.0, (size.height-height)/2.0 — so if its origin is moved positively from the origin you will have blank areas at the other side, and if it is moved negatively from the origin you will lose some information at that edge.
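If the whole image should survive, one possible fix is to aspect-fit instead of aspect-fill, i.e. take min rather than max so the drawn rect never exceeds the context (a sketch in current Swift, not the poster's final code):

func scaleImageWithImage(image: UIImage, size: CGSize) -> UIImage? {
    // min = aspect-fit: the whole image fits inside `size`, possibly with empty margins;
    // max = aspect-fill: the image covers `size` and the overflow is cropped.
    let scale = min(size.width / image.size.width, size.height / image.size.height)
    let width = image.size.width * scale
    let height = image.size.height * scale
    let imageRect = CGRect(x: (size.width - width) / 2.0,
                           y: (size.height - height) / 2.0,
                           width: width,
                           height: height)
    UIGraphicsBeginImageContextWithOptions(size, false, 0)
    defer { UIGraphicsEndImageContext() }
    image.draw(in: imageRect)
    return UIGraphicsGetImageFromCurrentImageContext()
}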

Image quality loss when reassigning image to UIImageView

I am developing an application which can tint colors in an image and export it.
I have added 3 versions of an image asset to my Assets.xcassets folder.
These are 3 different sizes of the same image, namely:
image1.png, image1@2x.png, image1@3x.png
with respective sizes: 256x341, 512x683, 768x1024 pixels.
I have created a UIImageView on my storyboard called myImage and assigned image1 to myImage via storyboard->utilities->attributes inspector-> Image View -> Image.
I am trying to use the following tintWithColor function as a UIImage extension to change the color of this image.
extension UIImage {
    func tintWithColor(color: UIColor) -> UIImage {
        UIGraphicsBeginImageContext(self.size)
        let context = UIGraphicsGetCurrentContext()
        self.drawAtPoint(CGPoint(x: 0, y: 0))
        CGContextSaveGState(context)
        // flip the image
        CGContextScaleCTM(context, 1.0, -1.0)
        CGContextTranslateCTM(context, 0.0, -self.size.height)
        // multiply blend mode
        CGContextSetBlendMode(context, CGBlendMode.Multiply)
        let rect = CGRectMake(0, 0, self.size.width, self.size.height)
        CGContextClipToMask(context, rect, self.CGImage)
        color.setFill()
        CGContextFillRect(context, rect)
        CGContextRestoreGState(context)
        // create uiimage
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage
    }
}
As I am testing this function in my viewDidLoad() to change the color of myImage.image as below, I see that (on whatever device I am using) the width and height of my originalImage are always 256 x 341.5:
override func viewDidLoad() {
    super.viewDidLoad()
    let originalImage = myImage.image
    print("LOG: original image width: \(originalImage!.size.width) height: \(originalImage!.size.height)")
    let tintedImage = originalImage!.tintWithColor(UIColor(hue: 300/360, saturation: 0.70, brightness: 0.70, alpha: 0.6))
    myImage.image = tintedImage
    // Do any additional setup after loading the view.
}
If I don't apply these changes to my image, the quality of my image on an iPhone 6 (4.7") device looks as below:
If I apply tintWithColor and then reassign the resulting image to my image view, the quality seems to drop and lines are smoothed, as can be seen below:
Is there a way to avoid this quality loss?
Eventually I will be exporting the high quality image, so I would like to apply this color tint function on the high quality version of this image.
Thank you.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(self.size.width, self.size.height), false, 3.0)
should work
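Hard-coding 3.0 forces a 3x bitmap on every device. A compact alternative sketch in current Swift that keeps whatever scale the source asset has (multiply the tint colour, then restore the alpha channel; tinted(with:) is an illustrative name, not the poster's method):

extension UIImage {
    func tinted(with color: UIColor) -> UIImage {
        let format = UIGraphicsImageRendererFormat.default()
        format.scale = scale   // keep the @2x/@3x resolution of the source image
        let renderer = UIGraphicsImageRenderer(size: size, format: format)
        return renderer.image { _ in
            // Draw the original, multiply the tint colour over it,
            // then restore the original alpha so transparency is preserved.
            draw(at: .zero)
            color.setFill()
            UIRectFillUsingBlendMode(CGRect(origin: .zero, size: size), .multiply)
            draw(at: .zero, blendMode: .destinationIn, alpha: 1)
        }
    }
}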
