iOS Swift - Scaling image with image truncation

I'm passing an image to this method in order to scale it and return a non-horizontal image that will be saved in the documents directory; however, the method is somehow truncating about a quarter of an inch off the side of the image.
Please advise.
func scaleImageWithImage(image: UIImage, size: CGSize) -> UIImage {
    let scale: CGFloat = max(size.width / image.size.width, size.height / image.size.height)
    let width: CGFloat = image.size.width * scale
    let height: CGFloat = image.size.height * scale
    let imageRect: CGRect = CGRectMake((size.width - width) / 2.0, (size.height - height) / 2.0, width, height)
    UIGraphicsBeginImageContextWithOptions(size, false, 0)
    image.drawInRect(imageRect)
    let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}

You are drawing in rect imageRect but the graphics context itself is of size size. Thus, if size is smaller than imageRect.size, you're losing some information at the edge. Moreover, imageRect doesn't start at 0,0 — its origin is (size.width-width)/2.0, (size.height-height)/2.0 — so if its origin is moved positively from the origin you will have blank areas at the other side, and if it is moved negatively from the origin you will lose some information at that edge.
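If the goal is to fit the whole image inside `size` without losing any of it, one option (a sketch, not necessarily the asker's exact requirement) is to use `min` instead of `max` when computing the scale, so both dimensions fit inside the context:

```swift
// Aspect-fit sketch, written in the same Swift 2 style as the question's code.
// Using min guarantees the scaled image never exceeds the context bounds,
// so nothing is truncated; any leftover space stays blank instead.
func scaleImageToFit(image: UIImage, size: CGSize) -> UIImage {
    let scale: CGFloat = min(size.width / image.size.width, size.height / image.size.height)
    let width = image.size.width * scale
    let height = image.size.height * scale
    // Centered inside the context of size `size`
    let imageRect = CGRectMake((size.width - width) / 2.0, (size.height - height) / 2.0, width, height)
    UIGraphicsBeginImageContextWithOptions(size, false, 0)
    image.drawInRect(imageRect)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
```

With `min`, `imageRect` always lies inside the context, so the origin offsets are non-negative and no edge information is lost.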

Related

Is there a need to resize a UIImage when setting it to an UIImageView to conserve memory?

When setting a UIImage on a UIImageView, I know that iOS automatically scales the image so that it fits within the image view. However, if I am loading a very large image file as a UIImage, will iOS automatically cut down the size of the UIImage (and therefore conserve memory) until it is just enough to fit the image view? Or do I have to implement the trimming of the image myself?
How do I trim the size of the UIImage if it is needed to do so?
Setting the size is optional, but there is no memory conservation either way: the full-size UIImage stays in memory, and it is simply rendered into the frame of the UIImageView. If you want to save memory you have to resize the image yourself.
This function will do the resizing of the image.
func ResizeImageToRequired(image: UIImage, targetSize: CGSize) -> UIImage {
    let size = image.size
    let widthRatio = targetSize.width / image.size.width
    let heightRatio = targetSize.height / image.size.height
    // Figure out what our orientation is, and use that to form the rectangle
    var newSize: CGSize
    if widthRatio > heightRatio {
        newSize = CGSizeMake(size.width * heightRatio, size.height * heightRatio)
    } else {
        newSize = CGSizeMake(size.width * widthRatio, size.height * widthRatio)
    }
    // This is the rect that we've calculated out and this is what is actually used below
    let rect = CGRectMake(0, 0, newSize.width, newSize.height)
    // Actually do the resizing to the rect using the ImageContext stuff
    UIGraphicsBeginImageContextWithOptions(newSize, false, 1.0)
    image.drawInRect(rect)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
Here you can call the above function, passing the required width and height as CGFloat values:
self.ResizeImageToRequired(UIImage(named: "yourImageName")!, targetSize: CGSizeMake(width, height))

Resizing image results in low resolution

So I am trying to use a specific PNG image for my map annotation. The original image is 761 x 761 and the resized annotation image that shows up in my app is all blurry and low-resolution-looking. Any idea why that is?
chargerAnnotationImage = UIImage(named: "ChargerGreen")!
let size = CGSize(width: 25, height: 25)
UIGraphicsBeginImageContext(size)
chargerAnnotationImage.drawInRect(CGRectMake(0, 0, size.width, size.height))
let resizedImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return resizedImage
Thank you!
Try this code for resize Image
The highest-level APIs for image resizing can be found in the UIKit framework. Given a UIImage, a temporary graphics context can be used to render a scaled version, using UIGraphicsBeginImageContextWithOptions() and UIGraphicsGetImageFromCurrentImageContext():
let image = UIImage(named: "x-men")!
let size = CGSizeApplyAffineTransform(image.size, CGAffineTransformMakeScale(0.1, 0.1))
let hasAlpha = false
let scale: CGFloat = 0.0 // Automatically use scale factor of main screen
UIGraphicsBeginImageContextWithOptions(size, !hasAlpha, scale)
image.drawInRect(CGRect(origin: CGPointZero, size: size))
let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return scaledImage
UIGraphicsBeginImageContextWithOptions() creates a temporary rendering context into which the original is drawn. The first argument, size, is the target size of the scaled image. The second argument, isOpaque is used to determine whether an alpha channel is rendered. Setting this to false for images without transparency (i.e. an alpha channel) may result in an image with a pink hue. The third argument scale is the display scale factor. When set to 0.0, the scale factor of the main screen is used, which for Retina displays is 2.0 or higher (3.0 on the iPhone 6 Plus).
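On iOS 10 and later, the same scaling can be done with UIGraphicsImageRenderer, which manages the context lifetime and the screen scale factor for you. A sketch in modern Swift (the function name is mine, not from the answer):

```swift
import UIKit

// Sketch using UIGraphicsImageRenderer (iOS 10+).
// The renderer uses the main screen's scale by default, so the
// result stays sharp on Retina displays without passing 0.0 manually.
func scaled(image: UIImage, by factor: CGFloat) -> UIImage {
    let size = CGSize(width: image.size.width * factor,
                      height: image.size.height * factor)
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: size))
    }
}
```

Unlike the begin/end context pair, the renderer cannot leak a context if an early return happens mid-draw.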

Resizing an image but preserving hard edges

I am using a piece of code from this link - Resize UIImage by keeping Aspect ratio and width, and it works perfectly, but I am wondering if it can be altered to preserve hard edges of pixels. I want to double the size of the image and keep the hard edge of the pixels.
class func resizeImage(image: UIImage, newHeight: CGFloat) -> UIImage {
    let scale = newHeight / image.size.height
    let newWidth = image.size.width * scale
    UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight))
    image.drawInRect(CGRectMake(0, 0, newWidth, newHeight))
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
(Image: what I want it to do)
(Image: what it actually does)
In Photoshop there is the nearest neighbour interpolation when resizing, is there something like that in iOS?
Inspired by the accepted answer, updated to Swift 5.
Swift 5
let image = UIImage(named: "Foo")!
let scale: CGFloat = 2.0
let newSize = image.size.applying(CGAffineTransform(scaleX: scale, y: scale))
UIGraphicsBeginImageContextWithOptions(newSize, false, UIScreen.main.scale)
let context = UIGraphicsGetCurrentContext()!
context.interpolationQuality = .none
let newRect = CGRect(origin: .zero, size: newSize)
image.draw(in: newRect)
let newImage = UIImage(cgImage: context.makeImage()!)
UIGraphicsEndImageContext()
Did a bit more digging and found the answer -
https://stackoverflow.com/a/25430447/4196903
but where
CGContextSetInterpolationQuality(context, kCGInterpolationHigh)
instead write
CGContextSetInterpolationQuality(context, CGInterpolationQuality.None)
You need to use the CISampler class (which is only available in iOS 9) and create your own custom image-processing filter for it, I think.
You can find more information here and here too

UIImageView cropping square

First of all, my app asks the user to pick a photo from a collection view or from pictures taken with the camera, then navigates to a cropping image editor, so that the final image is square according to the position of the cropping area. The problem is that there is little source material on how to do this without UIImagePicker.
I also tried taking a square picture, with no success.
I implemented a UIPinchGesture on the image view so the user can zoom in and out, but there is no cropping square on the image view; I still have to add the cropping area.
This is cropping UIImage function:
func croppImageByRect() -> UIImage {
    let ratio: CGFloat = 1 // square
    let delta: CGFloat
    let offSet: CGPoint
    // make a new square size, that is the resized image's width
    let newSize = CGSizeMake(size.width, (size.width - size.width / 8))
    // figure out if the picture is landscape or portrait, then
    // calculate scale factor and offset
    if size.width > size.height {
        delta = (ratio * size.width - ratio * size.height)
        offSet = CGPointMake(delta / 2, 0)
    } else {
        delta = (ratio * size.height - ratio * size.width)
        offSet = CGPointMake(0, delta / 2)
    }
    // make the final clipping rect based on the calculated values
    let clipRect = CGRectMake(-offSet.x, -offSet.y, (ratio * size.width) + delta, (ratio * size.height) + delta)
    // start a new context, with scale factor 0.0 so retina displays get
    // a high quality image
    UIGraphicsBeginImageContextWithOptions(newSize, true, 0.0)
    UIRectClip(clipRect)
    drawInRect(clipRect)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
Well, to start, this line:
// make a new square size, that is the resized imaged width
let newSize = CGSizeMake(size.width, (size.width - size.width / 8))
Will not result in a square size since width and height are not the same.
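A center square crop can be sketched like this, assuming the goal is a square whose side is the image's shorter dimension (written as a UIImage extension in the Swift 2 style of the question; the method name is mine):

```swift
// Sketch: center-crop a UIImage to a square sized by the shorter dimension.
extension UIImage {
    func croppedToSquare() -> UIImage {
        let side = min(size.width, size.height)
        // Offset so the square is centered in the original image
        let origin = CGPoint(x: (size.width - side) / 2, y: (size.height - side) / 2)
        let squareSize = CGSize(width: side, height: side)
        UIGraphicsBeginImageContextWithOptions(squareSize, true, 0.0)
        // Draw shifted so only the centered square lands inside the context
        drawInRect(CGRect(x: -origin.x, y: -origin.y, width: size.width, height: size.height))
        let cropped = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return cropped
    }
}
```

Because the context itself is square, the result is guaranteed square, unlike the newSize computed above.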

How to improve sharpness when resizing UIImage?

In my app I need to upload photos on server, so before that, I want to resize and compress them to acceptable size. I tried to resize them in two ways, and the first way is:
// image is an instance of original UIImage that I want to resize
let width : Int = 640
let height : Int = 640
let bitsPerComponent = CGImageGetBitsPerComponent(image.CGImage)
let bytesPerRow = CGImageGetBytesPerRow(image.CGImage)
let colorSpace = CGImageGetColorSpace(image.CGImage)
let bitmapInfo = CGImageGetBitmapInfo(image.CGImage)
let context = CGBitmapContextCreate(nil, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo)
CGContextSetInterpolationQuality(context, kCGInterpolationHigh)
CGContextDrawImage(context, CGRect(origin: CGPointZero, size: CGSize(width: CGFloat(width), height: CGFloat(height))), image.CGImage)
image = UIImage(CGImage: CGBitmapContextCreateImage(context))
The other way:
image = RBResizeImage(image, targetSize: CGSizeMake(640, 640))
func RBResizeImage(image: UIImage?, targetSize: CGSize) -> UIImage? {
    if let image = image {
        let size = image.size
        let widthRatio = targetSize.width / image.size.width
        let heightRatio = targetSize.height / image.size.height
        // Figure out what our orientation is, and use that to form the rectangle
        var newSize: CGSize
        if widthRatio > heightRatio {
            newSize = CGSizeMake(size.width * heightRatio, size.height * heightRatio)
        } else {
            newSize = CGSizeMake(size.width * widthRatio, size.height * widthRatio)
        }
        // This is the rect that we've calculated out and this is what is actually used below
        let rect = CGRectMake(0, 0, newSize.width, newSize.height)
        // Actually do the resizing to the rect using the ImageContext stuff
        UIGraphicsBeginImageContextWithOptions(newSize, false, 1.0)
        image.drawInRect(rect)
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage
    } else {
        return nil
    }
}
After that, I use UIImageJPEGRepresentation to compress the UIImage, but even with a compressionQuality of 1 the photo is still blurry (it's mostly visible on object edges; maybe it's not a big deal, but the photo is also three to five times larger in file size than a comparable photo from Instagram, without the same sharpness). At 0.5 it's even worse, of course, and the photo is still larger (in KB) than the same photo from Instagram.
Photo from my app, compressionQuality is 1, edges are blurry, and size is 341 KB
Photo from Instagram, edges are sharp, and size is 136 KB
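For reference, the compression step being described takes a quality value in 0...1. A sketch of the resize-then-compress flow, in the Swift 2 style of the thread (originalImage is a placeholder, and 0.7 is an arbitrary example quality, not a recommendation from the answers):

```swift
// Sketch: resize into 640x640 bounds first, then JPEG-compress for upload.
if let resized = RBResizeImage(originalImage, targetSize: CGSizeMake(640, 640)),
   let jpegData = UIImageJPEGRepresentation(resized, 0.7) {
    // jpegData (NSData) is ready to upload
    print("JPEG size: \(jpegData.length) bytes")
}
```

Lowering the quality shrinks the file but cannot sharpen an image that was already softened by the resize, so the resize step is where sharpness is won or lost.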
EDIT:
Ok, but I'm a little confused right now, and I'm not sure what to do to maintain the aspect ratio. This is how I crop the image (the scrollView has a UIImageView, so I can move and zoom the image, and at the end I'm able to crop the visible part of the scrollView, which is square). Anyway, the image from above was originally 2048x2048, but it's still blurry.
var scale = 1/scrollView.zoomScale
var visibleRect : CGRect = CGRect()
visibleRect.origin.x = scrollView.contentOffset.x * scale
visibleRect.origin.y = scrollView.contentOffset.y * scale
visibleRect.size.width = scrollView.bounds.size.width * scale
visibleRect.size.height = scrollView.bounds.size.height * scale
image = crop(image!, rect: visibleRect)
func crop(srcImage : UIImage, rect : CGRect) -> UIImage? {
var imageRef = CGImageCreateWithImageInRect(srcImage.CGImage, rect)
var cropped = UIImage(CGImage: imageRef)
return cropped
}
Your code is right, but the problem is that you don't maintain the aspect ratio of the image.
In your code you create a new rect as
let rect = CGRectMake(0, 0, newSize.width, newSize.height)
If the given image has the same height and width, this gives a smoothly resized image, but if the height and width differ, the image comes out blurry. So try to maintain the aspect ratio.
Reply to the edited question:
Keep either the height or the width of the cropped image constant.
For example, if you keep the width constant, use the following code:
visibleRect.size.height = orignalImg.size.height * visibleRect.size.width / orignalImg.size.width
image = crop(image!, rect: visibleRect)
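The aspect-ratio bookkeeping above can also be factored into a small helper (a sketch; the function name is mine, not from the answer):

```swift
import UIKit

// Sketch: compute the largest size that fits inside `bounds` while
// keeping the aspect ratio of `original`.
func aspectFitSize(original: CGSize, bounds: CGSize) -> CGSize {
    let scale = min(bounds.width / original.width, bounds.height / original.height)
    return CGSize(width: original.width * scale, height: original.height * scale)
}
```

For example, fitting a 2048x1536 image into 640x640 bounds gives a scale of 0.3125 and a target size of 640x480, so neither dimension is stretched.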