UIImageView cropping square - ios

First of all, my app asks the user to pick a photo from a collection view or from pictures taken with the camera, then navigates to a cropping image editor, so that the final image is square according to the position of the cropping area. The problem is there is little detailed source material about this. How can it be done without UIImagePicker?
I also tried taking a square picture, with no success.
I implemented a UIPinchGestureRecognizer on the image view so the user can zoom in and out, but there is no cropping square on the image view. I still have to add the cropping area.
This is my UIImage cropping function:
func croppImageByRect() -> UIImage {
    let ratio: CGFloat = 1 // square
    let delta: CGFloat
    let offSet: CGPoint
    // make a new square size, that is the resized image's width
    let newSize = CGSizeMake(size.width, (size.width - size.width / 8))
    // figure out if the picture is landscape or portrait, then
    // calculate scale factor and offset
    if (size.width > size.height) {
        delta = (ratio * size.width - ratio * size.height)
        offSet = CGPointMake(delta / 2, 0)
    } else {
        delta = (ratio * size.height - ratio * size.width)
        offSet = CGPointMake(0, delta / 2)
    }
    // make the final clipping rect based on the calculated values
    let clipRect = CGRectMake(-offSet.x, -offSet.y, (ratio * size.width) + delta, (ratio * size.height) + delta)
    // start a new context with scale factor 0.0 so retina displays get
    // a high-quality image
    UIGraphicsBeginImageContextWithOptions(newSize, true, 0.0)
    UIRectClip(clipRect)
    drawInRect(clipRect)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}

Well, to start, this line:
// make a new square size, that is the resized image's width
let newSize = CGSizeMake(size.width, (size.width - size.width / 8))
will not result in a square size, since the width and height are not the same.
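For a crop that actually is square, here is a minimal sketch (centerSquareCropped is a hypothetical helper name, not from the question): the side length is the smaller of the two dimensions, and the draw rect is shifted so the centered square lands at the context origin.

extension UIImage {
    func centerSquareCropped() -> UIImage {
        // the largest square that fits is bounded by the smaller dimension
        let side = min(size.width, size.height)
        let offset = CGPoint(x: (size.width - side) / 2.0,
                             y: (size.height - side) / 2.0)
        let newSize = CGSize(width: side, height: side)
        UIGraphicsBeginImageContextWithOptions(newSize, true, 0.0)
        // draw the full image shifted so the centered square starts at (0, 0)
        draw(in: CGRect(x: -offset.x, y: -offset.y,
                        width: size.width, height: size.height))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage ?? self
    }
}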

Related

Swift: Overlay NxN grid of buttons of aspect fit image

Tried searching for a while but couldn't find a way to do this.
I currently have a UIImage that is aspect-fit. I'm trying to overlay an NxN grid of buttons over the image.
My thought process was to find the boundaries of the image in the image view, and then overlay the buttons based on those boundaries. I usually use storyboards, but I'm taking a stab at this programmatically since I couldn't figure out the aspect-fit part in storyboards.
I found this code for finding the boundaries of my image after it's been fit, and did some print statements to be sure that it is finding the correct boundaries, which I believe it is.
extension UIImageView {
    var contentClippingRect: CGRect {
        guard let image = image else { return bounds }
        guard contentMode == .scaleAspectFit else { return bounds }
        guard image.size.width > 0 && image.size.height > 0 else { return bounds }
        // aspect-fit always uses the smaller of the two ratios,
        // regardless of which image dimension is larger
        let scale = min(bounds.width / image.size.width,
                        bounds.height / image.size.height)
        let size = CGSize(width: image.size.width * scale, height: image.size.height * scale)
        let x = (bounds.width - size.width) / 2.0
        let y = (bounds.height - size.height) / 2.0
        return CGRect(x: x, y: y, width: size.width, height: size.height)
    }
}
I'm currently trying to initialize the stack view by passing the imageView.contentClippingRect frame to it. My understanding is that this would create a frame based on that x, y, width, and height, and I would be good to go. However, it places the stack view in the top left of the image view and not over the actual image. Any guidance would be greatly appreciated.
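One common pitfall with this setup is assigning the frame before Auto Layout has finished sizing the image view, in which case contentClippingRect is computed from stale bounds and everything ends up at the top left. A minimal sketch, assuming a hypothetical gridStack added as a subview of the image view (so the clipping rect's coordinates apply directly) and using the contentClippingRect extension above:

import UIKit

class GridViewController: UIViewController {
    let imageView = UIImageView() // contentMode = .scaleAspectFit
    let gridStack = UIStackView() // hypothetical NxN grid of buttons

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        // Bounds are final here. contentClippingRect is expressed in the
        // image view's own coordinate space, so gridStack must be a
        // subview of imageView for this frame to line up with the image.
        gridStack.frame = imageView.contentClippingRect
    }
}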

Calculate position in image after ScaleAspectFill in UIImageView

I'm loading images of various sizes into a UIImageView. Some of the images come with annotations, and each annotation has a unique position in the image. The annotation coordinates are relative to the original image size.
How do I transform the coordinates so the annotations show up at the same position in the UIImageView after resizing?
I could use AVMakeRectWithAspectRatioInsideRect to calculate the new position, but this only works with ScaleAspectFit and not ScaleAspectFill, which I must use.
You need to:
- calculate the aspectFill ratio
- multiply the annotation's x, y coordinates by that ratio
- adjust the point by the amount of the image that extends outside the view frame
This is one way to do it (a little verbose, for clarity):
func aspectFillPoint(for point: CGPoint, in view: UIImageView) -> CGPoint {
    guard let img = view.image else {
        return CGPoint.zero
    }
    // imgSize will be modified
    var imgSize = img.size
    let viewSize = view.frame.size
    let aspectWidth = viewSize.width / imgSize.width
    let aspectHeight = viewSize.height / imgSize.height
    // calculate aspectFill ratio
    let f = max(aspectWidth, aspectHeight)
    // scale imgSize
    imgSize.width *= f
    imgSize.height *= f
    // unless the aspect ratio of the view is the same as the image,
    // it will extend either above and below or left and right
    // of the view frame
    let xOffset = (viewSize.width - imgSize.width) / 2.0
    let yOffset = (viewSize.height - imgSize.height) / 2.0
    // scale the original point, and adjust for offsets
    return CGPoint(
        x: (point.x * f) + xOffset,
        y: (point.y * f) + yOffset
    )
}
Assuming you have an image assigned to theImageView, which is set to .scaleAspectFill, you can call it like this:
let annotationPoint = CGPoint(x: 232, y: 148)
var newPoint = aspectFillPoint(for: annotationPoint, in: theImageView)
// Note: newPoint is relative to the View Bounds, so unless the
// imageView is at 0,0 we need to adjust for position
newPoint.x += theImageView.frame.origin.x
newPoint.y += theImageView.frame.origin.y

How to resize UIImage programmatically to be like setting scaleAspectFill content mode?

I've found several examples of resizing an image while keeping its aspect ratio given a certain CGRect, or, for example, only the width, like in this post. But those examples always create a new image that looks like the scaleAspectFit content mode. I would like to get one like the scaleAspectFill content mode, but I can't find any example.
I have used this for one of my projects.
extension UIImage {
    func imageWithSize(size: CGSize) -> UIImage {
        var scaledImageRect = CGRect.zero
        let aspectWidth: CGFloat = size.width / self.size.width
        let aspectHeight: CGFloat = size.height / self.size.height
        // max - scaleAspectFill | min - scaleAspectFit
        let aspectRatio: CGFloat = max(aspectWidth, aspectHeight)
        scaledImageRect.size.width = self.size.width * aspectRatio
        scaledImageRect.size.height = self.size.height * aspectRatio
        scaledImageRect.origin.x = (size.width - scaledImageRect.size.width) / 2.0
        scaledImageRect.origin.y = (size.height - scaledImageRect.size.height) / 2.0
        UIGraphicsBeginImageContextWithOptions(size, false, 0)
        self.draw(in: scaledImageRect)
        let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return scaledImage!
    }
}
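Usage would look like this (640x640 is just an illustrative target; photo is an assumed UIImage):

let squareFill = photo.imageWithSize(size: CGSize(width: 640, height: 640))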

ios swift - Scaling image with image truncation

I'm passing an image to this method in order to scale it and return an image that isn't horizontal, which will then be saved in the documents directory; however, the method is somehow truncating maybe a quarter of an inch off the side of the image.
Please advise.
func scaleImageWithImage(image: UIImage, size: CGSize) -> UIImage {
    let scale: CGFloat = max(size.width / image.size.width, size.height / image.size.height)
    let width: CGFloat = image.size.width * scale
    let height: CGFloat = image.size.height * scale
    let imageRect: CGRect = CGRectMake((size.width - width) / 2.0, (size.height - height) / 2.0, width, height)
    UIGraphicsBeginImageContextWithOptions(size, false, 0)
    image.drawInRect(imageRect)
    let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
You are drawing in rect imageRect but the graphics context itself is of size size. Thus, if size is smaller than imageRect.size, you're losing some information at the edge. Moreover, imageRect doesn't start at 0,0 — its origin is (size.width-width)/2.0, (size.height-height)/2.0 — so if its origin is moved positively from the origin you will have blank areas at the other side, and if it is moved negatively from the origin you will lose some information at that edge.
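If the goal is to keep the whole image instead of filling and cropping, one alternative (a sketch, not the asker's method) is to take min instead of max, which keeps imageRect entirely inside the context at the cost of letterboxing:

func aspectFitImage(image: UIImage, size: CGSize) -> UIImage? {
    // min keeps the entire image visible; max (as above) fills and crops
    let scale = min(size.width / image.size.width, size.height / image.size.height)
    let width = image.size.width * scale
    let height = image.size.height * scale
    let imageRect = CGRect(x: (size.width - width) / 2.0,
                           y: (size.height - height) / 2.0,
                           width: width, height: height)
    // opaque: false leaves the letterbox bars transparent
    UIGraphicsBeginImageContextWithOptions(size, false, 0)
    image.draw(in: imageRect)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}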

How to improve sharpness when resizing UIImage?

In my app I need to upload photos to a server, so before that I want to resize and compress them to an acceptable size. I tried to resize them in two ways; the first way is:
// image is an instance of original UIImage that I want to resize
let width : Int = 640
let height : Int = 640
let bitsPerComponent = CGImageGetBitsPerComponent(image.CGImage)
let bytesPerRow = CGImageGetBytesPerRow(image.CGImage)
let colorSpace = CGImageGetColorSpace(image.CGImage)
let bitmapInfo = CGImageGetBitmapInfo(image.CGImage)
let context = CGBitmapContextCreate(nil, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo)
CGContextSetInterpolationQuality(context, kCGInterpolationHigh)
CGContextDrawImage(context, CGRect(origin: CGPointZero, size: CGSize(width: CGFloat(width), height: CGFloat(height))), image.CGImage)
image = UIImage(CGImage: CGBitmapContextCreateImage(context))
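Worth noting: bytesPerRow here comes from the original image, while the new context is only 640 pixels wide, so the row stride may not match the new width. Passing 0 lets Core Graphics compute a suitable value, for example:

// Passing 0 for bytesPerRow lets Core Graphics choose a stride
// that matches the new 640-pixel width.
let context = CGBitmapContextCreate(nil, width, height, bitsPerComponent, 0, colorSpace, bitmapInfo)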
The other way:
image = RBResizeImage(image, targetSize: CGSizeMake(640, 640))
func RBResizeImage(image: UIImage?, targetSize: CGSize) -> UIImage? {
    if let image = image {
        let size = image.size
        let widthRatio = targetSize.width / image.size.width
        let heightRatio = targetSize.height / image.size.height
        // Figure out what our orientation is, and use that to form the rectangle
        var newSize: CGSize
        if (widthRatio > heightRatio) {
            newSize = CGSizeMake(size.width * heightRatio, size.height * heightRatio)
        } else {
            newSize = CGSizeMake(size.width * widthRatio, size.height * widthRatio)
        }
        // This is the rect that we've calculated out and this is what is actually used below
        let rect = CGRectMake(0, 0, newSize.width, newSize.height)
        // Actually do the resizing to the rect using the ImageContext stuff
        UIGraphicsBeginImageContextWithOptions(newSize, false, 1.0)
        image.drawInRect(rect)
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage
    } else {
        return nil
    }
}
After that, I use UIImageJPEGRepresentation to compress the UIImage, but even if compressionQuality is 1, the photo is still blurry (it's mostly visible on object edges; maybe it's not a big deal, but the photo is three to five times larger than the same photo from Instagram, for example, yet doesn't have the same sharpness). With 0.5 it's even worse, of course, and the photo is still larger (in KB) than the same photo from Instagram.
Photo from my app, compressionQuality is 1, edges are blurry, and size is 341 KB
Photo from Instagram, edges are sharp, and size is 136 KB
EDIT:
OK, but I'm a little confused right now; I'm not sure what to do to maintain the aspect ratio. This is how I crop the image (the scrollView has a UIImageView, so I can move and zoom the image, and at the end I'm able to crop the visible part of the scrollView, which is square). Anyway, the image above was originally 2048x2048, but it's still blurry.
var scale = 1/scrollView.zoomScale
var visibleRect : CGRect = CGRect()
visibleRect.origin.x = scrollView.contentOffset.x * scale
visibleRect.origin.y = scrollView.contentOffset.y * scale
visibleRect.size.width = scrollView.bounds.size.width * scale
visibleRect.size.height = scrollView.bounds.size.height * scale
image = crop(image!, rect: visibleRect)
func crop(srcImage: UIImage, rect: CGRect) -> UIImage? {
    let imageRef = CGImageCreateWithImageInRect(srcImage.CGImage, rect)
    let cropped = UIImage(CGImage: imageRef)
    return cropped
}
Your code is right, but the problem is that you don't maintain the aspect ratio of the image. In your code you create a new rect as
let rect = CGRectMake(0, 0, newSize.width, newSize.height)
If your image has the same height and width, this gives a smoothly resized image, but if the height and width are different, your image gets blurry. So try to maintain the aspect ratio.
Reply to the edited question:
Keep either the height or the width of the cropped image constant. For example, if you keep the width constant, use the following code:
visibleRect.size.height = orignalImg.size.height * visibleRect.size.width / orignalImg.size.width
image = crop(image!, rect: visibleRect)
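Combining the answer's advice with the code above, a sketch of the resize step (assuming cropped is the non-optional image returned by crop, and 640 is just an illustrative target width):

// Derive the target height from the cropped image's aspect ratio,
// so the resize never stretches the image.
let targetWidth: CGFloat = 640
let targetSize = CGSize(width: targetWidth,
                        height: cropped.size.height * targetWidth / cropped.size.width)
let resized = RBResizeImage(cropped, targetSize: targetSize)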
