I'm trying to divide an image into 16 pieces (a 4 x 4 matrix). I'm using Swift 2.1. Here's the code:
let cellSize = Int(originalImage.size.height) / 4
for i in 0..<4 {
for p in 0..<4 {
let tmpImgRef: CGImage = originalImage.CGImage!
let rect: CGImage = CGImageCreateWithImageInRect(tmpImgRef, CGRectMake(CGFloat(i * cellSize), CGFloat(p * cellSize), CGFloat(cellSize), CGFloat(cellSize)))!
let cell = UIImage(CGImage: rect)
gameCells.append(cell)
}
}
This works, but the images it returns are only part of the original image. I've been searching, and I know that's because when I create a CGImage it has a different size than the UIImage, but I don't know how to fix it. If I could compute cellSize from the height of the CGImage instead of the UIImage, I suppose that would fix it, but I can't get the CGImage height.
Thanks for the help!
The fundamental issue is the difference between how UIImage and CGImage report their size: UIImage uses "points" while CGImage uses pixels, and the conversion factor between the two is the image's scale.
For example, if a UIImage has a scale of 3, then for every "point" in any given direction in the UIImage there are three pixels in that direction in the underlying CGImage. So for a UIImage with a scale of 3 and a size of 100 x 100 points, the underlying CGImage has a size of 300 x 300 pixels.
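To see the relationship in code, here is a quick sketch (the asset name is a placeholder; any UIImage will do):
// Points (UIImage) vs. pixels (CGImage): pixels = points * scale
let photo = UIImage(named: "example")!    // hypothetical asset, e.g. 100 x 100 points at scale 3
let pointSize = photo.size                // 100 x 100 (points)
let pixelWidth = photo.cgImage!.width     // 300 (pixels)
let pixelHeight = photo.cgImage!.height   // 300 (pixels)
// pixelWidth == Int(pointSize.width * photo.scale)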
To return a simple array of images sliced by n x n (e.g. if n is three, there will be nine images in the array), you can do something like the following in Swift 3:
/// Slice image into array of tiles
///
/// - Parameters:
/// - image: The original image.
/// - howMany: How many rows/columns to slice the image up into.
///
/// - Returns: An array of images.
///
/// - Note: The order of the images that are returned will correspond
/// to the `imageOrientation` of the image. If the image's
/// `imageOrientation` is not `.up`, take care interpreting
/// the order in which the tiled images are returned.
func slice(image: UIImage, into howMany: Int) -> [UIImage] {
let width: CGFloat
let height: CGFloat
switch image.imageOrientation {
case .left, .leftMirrored, .right, .rightMirrored:
width = image.size.height
height = image.size.width
default:
width = image.size.width
height = image.size.height
}
let tileWidth = Int(width / CGFloat(howMany))
let tileHeight = Int(height / CGFloat(howMany))
let scale = Int(image.scale)
var images = [UIImage]()
let cgImage = image.cgImage!
var adjustedHeight = tileHeight
var y = 0
for row in 0 ..< howMany {
if row == (howMany - 1) {
adjustedHeight = Int(height) - y
}
var adjustedWidth = tileWidth
var x = 0
for column in 0 ..< howMany {
if column == (howMany - 1) {
adjustedWidth = Int(width) - x
}
let origin = CGPoint(x: x * scale, y: y * scale)
let size = CGSize(width: adjustedWidth * scale, height: adjustedHeight * scale)
let tileCgImage = cgImage.cropping(to: CGRect(origin: origin, size: size))!
images.append(UIImage(cgImage: tileCgImage, scale: image.scale, orientation: image.imageOrientation))
x += tileWidth
}
y += tileHeight
}
return images
}
Or, in Swift 2.3:
func slice(image image: UIImage, into howMany: Int) -> [UIImage] {
let width: CGFloat
let height: CGFloat
switch image.imageOrientation {
case .Left, .LeftMirrored, .Right, .RightMirrored:
width = image.size.height
height = image.size.width
default:
width = image.size.width
height = image.size.height
}
let tileWidth = Int(width / CGFloat(howMany))
let tileHeight = Int(height / CGFloat(howMany))
let scale = Int(image.scale)
var images = [UIImage]()
let cgImage = image.CGImage!
var adjustedHeight = tileHeight
var y = 0
for row in 0 ..< howMany {
if row == (howMany - 1) {
adjustedHeight = Int(height) - y
}
var adjustedWidth = tileWidth
var x = 0
for column in 0 ..< howMany {
if column == (howMany - 1) {
adjustedWidth = Int(width) - x
}
let origin = CGPoint(x: x * scale, y: y * scale)
let size = CGSize(width: adjustedWidth * scale, height: adjustedHeight * scale)
let tileCgImage = CGImageCreateWithImageInRect(cgImage, CGRect(origin: origin, size: size))!
images.append(UIImage(CGImage: tileCgImage, scale: image.scale, orientation: image.imageOrientation))
x += tileWidth
}
y += tileHeight
}
return images
}
This makes sure that the resulting images are at the correct scale (which is why the code above strides through the image in "points" and then multiplies by the scale to get the correct pixels in the CGImage). It also means that if the dimensions (measured in points) are not evenly divisible by n, the last image in each row or column makes up the difference. E.g. when you make three tiles from an image with a height of 736 points, the first two will be 245 points tall, but the last one will be 246 points.
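For example, a hypothetical call site (the asset name is a placeholder):
// Hypothetical usage of slice(image:into:)
if let puzzle = UIImage(named: "puzzle") {
    let tiles = slice(image: puzzle, into: 4)
    print(tiles.count)       // 16
    print(tiles[0].size)     // roughly a quarter of the original width and height, in points
}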
There is one exception that this does not (entirely) handle gracefully: if the UIImage has an imageOrientation other than .up, the order in which the images are returned corresponds to that orientation, not to the upper-left corner of the image as you view it.
You can split your image in two parts vertically and horizontally and sub-split the result as needed:
extension UIImage {
var topHalf: UIImage? {
guard let cgImage = cgImage, let image = cgImage.cropping(to: CGRect(origin: .zero, size: CGSize(width: size.width, height: size.height/2))) else { return nil }
return UIImage(cgImage: image, scale: scale, orientation: imageOrientation)
}
var bottomHalf: UIImage? {
guard let cgImage = cgImage, let image = cgImage.cropping(to: CGRect(origin: CGPoint(x: 0, y: CGFloat(Int(size.height)-Int(size.height/2))), size: CGSize(width: size.width, height: CGFloat(Int(size.height) - Int(size.height/2))))) else { return nil }
return UIImage(cgImage: image, scale: scale, orientation: imageOrientation)
}
var leftHalf: UIImage? {
guard let cgImage = cgImage, let image = cgImage.cropping(to: CGRect(origin: .zero, size: CGSize(width: size.width/2, height: size.height))) else { return nil }
return UIImage(cgImage: image, scale: scale, orientation: imageOrientation)
}
var rightHalf: UIImage? {
guard let cgImage = cgImage, let image = cgImage.cropping(to: CGRect(origin: CGPoint(x: CGFloat(Int(size.width)-Int((size.width/2))), y: 0), size: CGSize(width: CGFloat(Int(size.width)-Int((size.width/2))), height: size.height)))
else { return nil }
return UIImage(cgImage: image, scale: scale, orientation: imageOrientation)
}
var splitedInFourParts: [UIImage] {
guard let topHalf = topHalf,
let bottomHalf = bottomHalf,
let topLeft = topHalf.leftHalf,
let topRight = topHalf.rightHalf,
let bottomLeft = bottomHalf.leftHalf,
let bottomRight = bottomHalf.rightHalf else { return [] }
return [topLeft, topRight, bottomLeft, bottomRight]
}
var splitedInSixteenParts: [UIImage] {
var array = splitedInFourParts.flatMap({$0.splitedInFourParts})
// if you need it in reading order you need to swap some image positions
array.swapAt(2, 4)
array.swapAt(3, 5)
array.swapAt(10, 12)
array.swapAt(11, 13)
return array
}
}
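A hypothetical usage of the extension above (the asset name is a placeholder):
let photo = UIImage(named: "leopard")!         // hypothetical asset
let quarters = photo.splitedInFourParts        // [topLeft, topRight, bottomLeft, bottomRight]
let sixteenths = photo.splitedInSixteenParts   // 16 tiles, in reading order
print(quarters.count, sixteenths.count)        // 4 16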
Splitting the image by columns and rows:
extension UIImage {
func matrix(_ rows: Int, _ columns: Int) -> [UIImage] {
let y = (size.height / CGFloat(rows)).rounded()
let x = (size.width / CGFloat(columns)).rounded()
var images: [UIImage] = []
images.reserveCapacity(rows * columns)
guard let cgImage = cgImage else { return [] }
(0..<rows).forEach { row in
(0..<columns).forEach { column in
var width = Int(x)
var height = Int(y)
if row == rows-1 && size.height.truncatingRemainder(dividingBy: CGFloat(rows)) != 0 {
height = Int(size.height - size.height / CGFloat(rows) * (CGFloat(rows)-1))
}
if column == columns-1 && size.width.truncatingRemainder(dividingBy: CGFloat(columns)) != 0 {
width = Int(size.width - (size.width / CGFloat(columns) * (CGFloat(columns)-1)))
}
if let image = cgImage.cropping(to: CGRect(origin: CGPoint(x: column * Int(x), y: row * Int(y)), size: CGSize(width: width, height: height))) {
images.append(UIImage(cgImage: image, scale: scale, orientation: imageOrientation))
}
}
}
return images
}
}
let myPicture = UIImage(data: try! Data(contentsOf: URL(string:"https://i.stack.imgur.com/Xs4RX.jpg")!))!
let images = myPicture.matrix(4, 4)
images.count // 16
I've used this to slice an image into a matrix. The matrix is represented as a 1D array.
func snapshotImage(image: UIImage, rect: CGRect) -> UIImage {
var imageRect: CGRect! = rect
if image.scale > 1.0 {
imageRect = CGRect(origin: CGPoint(x: rect.origin.x * image.scale, y: rect.origin.y * image.scale), size: CGSize(width: rect.size.width * image.scale, height: rect.size.height * image.scale))
}
let imageRef: CGImage = image.cgImage!.cropping(to: imageRect)!
let result: UIImage = UIImage(cgImage: imageRef, scale: image.scale, orientation: image.imageOrientation)
return result
}
func sliceImage(image: UIImage, size: CGSize) -> [UIImage] {
var slices: [UIImage] = [UIImage]()
var rect = CGRect(x: 0.0, y: 0.0, width: size.width, height: size.height)
var y: Float = 0.0
let width: Int = Int(image.size.width / size.width)
let height: Int = Int(image.size.height / size.height)
for _ in 0...height {
var x: Float = 0.0
for _ in 0...width {
rect.origin.x = CGFloat(x);
slices.append(self.snapshotImage(image: image, rect: rect))
x += Float(size.width)
}
y += Float(size.height)
rect.origin.y = CGFloat(y)
}
return slices
}
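For instance, snapshotImage on its own can be used to grab a single tile in point coordinates (a sketch; the asset name is a placeholder, the image is assumed to be at least 100 x 100 points, and both functions are assumed to live in the same type, e.g. your view controller):
// Hypothetical usage: grab the top-left 100 x 100 point region of a photo
let photo = UIImage(named: "board")!    // hypothetical asset
let topLeftTile = snapshotImage(image: photo, rect: CGRect(x: 0, y: 0, width: 100, height: 100))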
Related
I am having trouble cropping pictures to an exact size in a wide format. For instance, I take a picture with the iPad front camera, which has a resolution of 960w x 1280h, and I need to crop it to exactly 875w x 570h. I tried some of the methods here, but they all either stretch the image or don't give me the size I want.
Here is the first method that I tried:
func cropToBounds(image: UIImage, width: Double, height: Double) -> UIImage {
let cgimage = image.cgImage!
let contextImage: UIImage = UIImage(cgImage: cgimage)
guard let newCgImage = contextImage.cgImage else { return contextImage }
let contextSize: CGSize = contextImage.size
var posX: CGFloat = 0.0
var posY: CGFloat = 0.0
let cropAspect: CGFloat = CGFloat(width / height)
var cropWidth: CGFloat = CGFloat(width)
var cropHeight: CGFloat = CGFloat(height)
if width > height { //Landscape
cropWidth = contextSize.width
cropHeight = contextSize.width / cropAspect
posY = (contextSize.height - cropHeight) / 2
} else if width < height { //Portrait
cropHeight = contextSize.height
cropWidth = contextSize.height * cropAspect
posX = (contextSize.width - cropWidth) / 2
} else { //Square
if contextSize.width >= contextSize.height { //Square on landscape (or square)
cropHeight = contextSize.height
cropWidth = contextSize.height * cropAspect
posX = (contextSize.width - cropWidth) / 2
}else{ //Square on portrait
cropWidth = contextSize.width
cropHeight = contextSize.width / cropAspect
posY = (contextSize.height - cropHeight) / 2
}
}
let rect: CGRect = CGRect(x: posX, y: posY, width: cropWidth, height: cropHeight)
// Create bitmap image from context using the rect
guard let imageRef: CGImage = newCgImage.cropping(to: rect) else { return contextImage}
// Create a new image based on the imageRef and rotate back to the original orientation
let cropped: UIImage = UIImage(cgImage: imageRef, scale: image.scale, orientation: image.imageOrientation)
print(image.scale)
UIGraphicsBeginImageContextWithOptions(CGSize(width: width, height: height), false, 0.0)
cropped.draw(in: CGRect(x: 0, y: 0, width: width, height: height))
let resized = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return resized ?? image
}
This always stretches the image.
I thought about trying to cut the exact size I wanted, so I tried this:
func cropImage(image: UIImage, width: Double, height: Double)->UIImage{
let cgimage = image.cgImage!
let contextImage: UIImage = UIImage(cgImage: cgimage)
let contextSize: CGSize = contextImage.size
var posX: CGFloat = 0.0
var posY: CGFloat = 0.0
var recWidth : CGFloat = CGFloat(width)
var recHeight : CGFloat = CGFloat(height)
if width > height { //Landscape
posY = (contextSize.height - recHeight) / 2
}
else { //Square
posX = (contextSize.width - recWidth) / 2
}
let rect: CGRect = CGRect(x: posX, y: posY, width: recWidth, height: recHeight)
let imageRef:CGImage = cgimage.cropping(to: rect)!
print(imageRef.width)
print(imageRef.height)
let croppedimage:UIImage = UIImage(cgImage: imageRef, scale: image.scale, orientation: image.imageOrientation)
print(croppedimage.size)
return croppedimage
}
But this resulted in an image with the opposite of what I want: 570w x 875h. So I thought about inverting the values, but if I do that I get 605w x 570h. Maybe the problem is in how I get the X and Y positions of the image?
EDIT
Here is what I am doing now after the help of Leo Dabus:
extension UIImage {
func cropped(to size: CGSize) -> UIImage? {
guard let cgImage = cgImage?
.cropping(to: .init(origin: .init(x: (self.size.width-size.width)/2,
y: (self.size.height-size.height)/2),
size: size)) else { return nil }
let format = imageRendererFormat
return UIGraphicsImageRenderer(size: size, format: format).image {
_ in
UIImage(cgImage: cgImage, scale: 1, orientation: imageOrientation)
.draw(in: .init(origin: .zero, size: size))
}
}
}
This is how I call it:
let foto = UIImage(data: imageData)!
let size = CGSize(width: 875.0, height: 570.0)
let cropedPhoto = foto.cropped(to: size)
The imageData comes from a capture of the front camera.
And this is my capture code:
@objc func takePhoto(_ sender: Any?) {
let videoOrientation = AVCaptureVideoOrientation.portrait
stillImageOutput!.connection(with: .video)?.videoOrientation = videoOrientation
let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
let gesture = previewView.gestureRecognizers
previewView.removeGestureRecognizer(gesture![0])
}
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
guard let imageData = photo.fileDataRepresentation()
else { return }
}
You just need to take the original width, subtract the destination width, divide by two, and use that as the cropping origin's x value. Then do the same with the heights to get the y position. Then just initialize a new UIImage with the cropped cgImage:
extension UIImage {
func cropped(to size: CGSize) -> UIImage? {
guard let cgImage = cgImage?
.cropping(to: .init(origin: .init(x: (self.size.width - size.width) / 2,
y: (self.size.height - size.height) / 2),
size: size)) else { return nil }
return UIImage(cgImage: cgImage, scale: 1, orientation: imageOrientation)
}
}
let imageURL = URL(string: "https://www.comendochucruteesalsicha.com.br/wp-content/uploads/2016/09/IMG_5356-960x1280.jpg")!
let image = UIImage(data: try! Data(contentsOf: imageURL))!
let squared = image.cropped(to: .init(width: 875, height: 570))
I'm working on an app where I'm cropping an image.
Currently, this is how I crop it:
mainPicture.layer.cornerRadius = mainPicture.frame.size.width / 2
mainPicture.clipsToBounds = true
The request is not to crop it from the middle, but rather to crop it with a specific radius, 12 px from the top.
I start with a normal image:
and when I currently crop it just gets cropped from the middle, so the result is like this:
The request is to crop it so that the top part of the circle will be 12 px from the top:
So that the final image would look like this:
How can this be done using Swift 4.0?
Here, what you need to do is first crop the original image into a square image from the top, with the margin you want (like 20), and then set that image on your image view.
Here's an extension you can write on the UIImage class for cropping:
extension UIImage {
func getCroppedImage(with topMargin: CGFloat) -> UIImage? {
let heightWidth = size.height < size.width ? size.height : size.width
let x = (size.width - heightWidth)/2
let rect = CGRect(x: x, y: topMargin, width: heightWidth, height: heightWidth)
if let imageRef = cgImage?.cropping(to: rect) {
return UIImage(cgImage: imageRef)
}
return nil
}
}
Then, before setting the image on the UIImageView, call this method on your image, like:
let image = UIImage(named: "test")
imageView.image = image?.getCroppedImage(with: 20)
Output:
This is the input image:
This is the Output:
Fixed it by cropping the image prior to posting it, using this function:
func cropToBounds(image: UIImage, width: CGFloat, height: CGFloat) -> UIImage {
let cgimage = image.cgImage!
let contextImage: UIImage = UIImage(cgImage: cgimage)
let contextSize: CGSize = contextImage.size
var posX: CGFloat = 0.0
var posY: CGFloat = 0.0
var cgwidth: CGFloat = width
var cgheight: CGFloat = height
// See what size is longer and create the center off of that
if contextSize.width > contextSize.height {
posX = ((contextSize.width - contextSize.height) / 2)
posY = 0
cgwidth = contextSize.height
cgheight = contextSize.height
} else {
posX = 0
posY = ((contextSize.height - contextSize.width) / 2)
cgwidth = contextSize.width
cgheight = contextSize.width
}
let rect: CGRect = CGRect(x: posX, y: posY, width: cgwidth, height: cgheight)
// Create bitmap image from context using the rect
let imageRef: CGImage = cgimage.cropping(to: rect)!
// Create a new image based on the imageRef and rotate back to the original orientation
let image: UIImage = UIImage(cgImage: imageRef, scale: image.scale, orientation: image.imageOrientation)
return image
}
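As written, the width and height arguments only provide initial values that are always overwritten, so the function crops to the largest centered square. A hypothetical call site (names are placeholders):
// Hypothetical usage: square-crop the picked photo before uploading it
let picked = UIImage(named: "selfie")!    // hypothetical placeholder for the picked photo
let squarePhoto = cropToBounds(image: picked, width: picked.size.width, height: picked.size.width)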
I am following this discussion and got an array of images, which are the divided parts of the original image. If I print it, it looks like this:
[<UIImage: 0x61000008ea60>, {309, 212}, <UIImage: 0x61000008ec90>, {309, 212}, <UIImage: 0x61000008ebf0>, {309, 213}, <UIImage: 0x61000008ec40>, {309, 213}]
How could I use elements of this array? In the regular case I would have the name of the image and could use, for example, UIImage(named: ""). This array doesn't show me any names. Do you have any idea?
Perhaps my mistake is here:
func setImage(){
testImage = UIImageView(frame: CGRect(x: 0, y: 0, width: size/1.5, height: size/1.5))
testImage.center = CGPoint(x: view.frame.width/2, y: view.frame.height/2)
slice(image: UIImage(named:"leopard_PNG14834")!, into: 8)
testImage.image = images[2]
view.addSubview(testImage)
}
Here is the function code
func slice(image: UIImage, into howMany: Int) -> [UIImage] {
let width: CGFloat
let height: CGFloat
switch image.imageOrientation {
case .left, .leftMirrored, .right, .rightMirrored:
width = image.size.height
height = image.size.width
default:
width = image.size.width
height = image.size.height
}
let tileWidth = Int(width / CGFloat(howMany))
let tileHeight = Int(height / CGFloat(howMany))
let scale = Int(image.scale)
let cgImage = image.cgImage!
var adjustedHeight = tileHeight
var y = 0
for row in 0 ..< howMany {
if row == (howMany - 1) {
adjustedHeight = Int(height) - y
}
var adjustedWidth = tileWidth
var x = 0
for column in 0 ..< howMany {
if column == (howMany - 1) {
adjustedWidth = Int(width) - x
}
let origin = CGPoint(x: x * scale, y: y * scale)
let size = CGSize(width: adjustedWidth * scale, height: adjustedHeight * scale)
let tileCgImage = cgImage.cropping(to: CGRect(origin: origin, size: size))!
images.append(UIImage(cgImage: tileCgImage, scale: image.scale, orientation: image.imageOrientation))
x += tileWidth
}
y += tileHeight
}
return images
}
If you're having a hard time adding your images to your ViewController, this is the common approach you would use to do so, assuming an input of an array of images:
let image = images[0]
let imageView = UIImageView(image: image)
self.view.addSubview(imageView)
EDIT
If you're messing with the frame of the image view, I would suggest doing it after you've initialized the UIImageView with an image, like above. To test what's going on, perhaps just add an image view with your desired image without touching the frame, then check the variables you are using to set the frame.
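A minimal sketch of that order of operations, assuming this runs inside a view controller and that images is the array returned by slice (the frame values are placeholders):
// Hypothetical sketch: create the image view from the image first, then adjust its frame
let tileView = UIImageView(image: images[2])
tileView.frame = CGRect(x: 0, y: 0, width: 100, height: 100)    // placeholder size
tileView.center = CGPoint(x: view.bounds.midX, y: view.bounds.midY)
view.addSubview(tileView)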
You already get an array with objects of type UIImage.
let images = slice(image: someOriginalImage, into: 4)
let firstImage = images[0] // firstImage is a UIImage
{309, 212} is the size of the image.
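So you can lay the tiles out straight from the array. A sketch, assuming this runs inside a view controller (the grid math matches the row-by-row order that slice returns):
// Hypothetical sketch: each element is a plain UIImage, usable like any other
let tiles = slice(image: someOriginalImage, into: 4)    // 16 images, row by row
let step = tiles[0].size                                // nominal tile size in points
for (index, tile) in tiles.enumerated() {
    let tileView = UIImageView(image: tile)
    tileView.frame = CGRect(x: CGFloat(index % 4) * step.width,
                            y: CGFloat(index / 4) * step.height,
                            width: tile.size.width,
                            height: tile.size.height)
    view.addSubview(tileView)
}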
I get images that are in 1920x1080 resolution. I am attempting to crop them to the center square area (i.e. a 1080x1080 square in the center). The resolution may change in the future. Would it be possible to crop the image this way? I have tried a couple of things posted on SO, but I always get an image that is 660x1080 if I try to crop and then center. Here is an image depicting what I want to achieve.
Basically the red is the original, green is what I want, and yellow just shows the midX and midY lines.
The code I tried.
func imageByCroppingImage(image: UIImage, toSize size: CGSize) -> UIImage{
let newCropWidth: CGFloat?
let newCropHeight: CGFloat?
if image.size.width < image.size.height {
if image.size.width < size.width {
newCropWidth = size.width
} else {
newCropWidth = image.size.width
}
newCropHeight = (newCropWidth! * size.height) / size.width
} else {
if image.size.height < size.height {
newCropHeight = size.height
} else {
newCropHeight = image.size.height
}
newCropWidth = (newCropHeight! * size.width) / size.height
}
let x = image.size.width / 2.0 - newCropWidth! / 2.0
let y = image.size.height / 2.0 - newCropHeight! / 2.0
let cropRect = CGRect(x: x, y: y, width: newCropWidth!, height: newCropHeight!)
let imageRef = image.cgImage!.cropping(to: cropRect)
let cropped = UIImage(cgImage: imageRef!, scale: 1.0, orientation: .right)
return cropped
}
And then I pass that method a CGSize(1080, 1080). I get an image that is 660, 1080. Any thoughts?
The intent is to crop the image, not to just display it centered.
You can use the same approach I used in this answer without the line UIBezierPath(ovalIn: breadthRect).addClip()
extension UIImage {
var isPortrait: Bool { size.height > size.width }
var isLandscape: Bool { size.width > size.height }
var breadth: CGFloat { min(size.width, size.height) }
var breadthSize: CGSize { .init(width: breadth, height: breadth) }
func squared(isOpaque: Bool = false) -> UIImage? {
guard let cgImage = cgImage?
.cropping(to: .init(origin: .init(x: isLandscape ? ((size.width-size.height)/2).rounded(.down) : 0,
y: isPortrait ? ((size.height-size.width)/2).rounded(.down) : 0),
size: breadthSize)) else { return nil }
let format = imageRendererFormat
format.opaque = isOpaque
return UIGraphicsImageRenderer(size: breadthSize, format: format).image { _ in
UIImage(cgImage: cgImage, scale: 1, orientation: imageOrientation)
.draw(in: .init(origin: .zero, size: breadthSize))
}
}
}
let imageURL = URL(string: "http://i.stack.imgur.com/Xs4RX.jpg")!
let image = UIImage(data: try! Data(contentsOf: imageURL))!
let squared = image.squared()
I want to crop images from the center with a specific width and height. I found this code in an SO question, but this method resizes the image and then crops it. I want my image only cropped, not resized. I tried modifying this code but I can't get the result that I want.
//cropImage
func cropToBounds(image: UIImage, width: Double, height: Double) -> UIImage {
let contextImage: UIImage = UIImage(cgImage: image.cgImage!)
let contextSize: CGSize = contextImage.size
var posX: CGFloat = 0.0
var posY: CGFloat = 0.0
var cgwidth: CGFloat = CGFloat(width)
var cgheight: CGFloat = CGFloat(height)
// See what size is longer and create the center off of that
if contextSize.width > contextSize.height {
posX = ((contextSize.width - contextSize.height) / 2)
posY = 0
cgwidth = contextSize.height
cgheight = contextSize.height
} else {
posX = 0
posY = ((contextSize.height - contextSize.width) / 2)
cgwidth = contextSize.width
cgheight = contextSize.width
}
let rect: CGRect = CGRect(x: posX, y: posY, width: cgwidth, height: cgheight)
// Create bitmap image from context using the rect
let imageRef: CGImage = contextImage.cgImage!.cropping(to: rect)!
// Create a new image based on the imageRef and rotate back to the original orientation
let image: UIImage = UIImage(cgImage: imageRef, scale: image.scale, orientation: image.imageOrientation)
return image
}
How can I do that?
It should be:
func cropImage(toRect rect:CGRect) -> UIImage? {
var rect = rect
rect.origin.y = rect.origin.y * self.scale
rect.origin.x = rect.origin.x * self.scale
rect.size.width = rect.width * self.scale
rect.size.height = rect.height * self.scale
guard let imageRef = self.cgImage?.cropping(to: rect) else {
return nil
}
let croppedImage = UIImage(cgImage:imageRef)
return croppedImage
}
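Since the goal is a centered crop, the rect can be built from the image size and the target size before calling it. A sketch, assuming cropImage(toRect:) lives in a UIImage extension (the asset name is a placeholder):
// Hypothetical usage: center a 200 x 200 point crop, then cut it out
let photo = UIImage(named: "portrait")!    // hypothetical asset
let target = CGSize(width: 200, height: 200)
let centeredRect = CGRect(x: (photo.size.width - target.width) / 2,
                          y: (photo.size.height - target.height) / 2,
                          width: target.width,
                          height: target.height)
let cropped = photo.cropImage(toRect: centeredRect)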
This makes sure the image will be cropped in the center, and includes the largest crop zone possible for the given width and height.
extension UIImage {
func crop(width: CGFloat = 60, height: CGFloat = 60) -> UIImage {
let scale = min(self.size.width / width, self.size.height / height)
let x = self.size.width > self.size.height
? (self.size.width - width * scale) / 2
: 0
let y = self.size.width > self.size.height
? 0
: (self.size.height - height * scale) / 2
let cropZone = CGRect(x: x, y: y, width: width * scale, height: height * scale)
guard let image: CGImage = self.cgImage?.cropping(to: cropZone) else { return UIImage() }
return UIImage(cgImage: image)
}
}
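A hypothetical call site (the asset name is a placeholder; note that the crop rect here is applied directly in the CGImage's pixel space, so this is simplest with @1x images):
// Hypothetical usage: center-crop to the largest square using the default 60 x 60 target
let photo = UIImage(named: "banner")!    // hypothetical asset
let squareCrop = photo.crop()
print(squareCrop.size)                   // the largest centered square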
I am using this to display an image whose aspect ratio is different from my frame's, so I center it and crop the sides.
extension UIImage {
func cropTo(size: CGSize) -> UIImage {
guard let cgimage = self.cgImage else { return self }
let contextImage: UIImage = UIImage(cgImage: cgimage)
var cropWidth: CGFloat = size.width
var cropHeight: CGFloat = size.height
if (self.size.height < size.height || self.size.width < size.width){
return self
}
let heightPercentage = self.size.height/size.height
let widthPercentage = self.size.width/size.width
if (heightPercentage < widthPercentage) {
cropHeight = size.height*heightPercentage
cropWidth = size.width*heightPercentage
} else {
cropHeight = size.height*widthPercentage
cropWidth = size.width*widthPercentage
}
let posX: CGFloat = (self.size.width - cropWidth)/2
let posY: CGFloat = (self.size.height - cropHeight)/2
let rect: CGRect = CGRect(x: posX, y: posY, width: cropWidth, height: cropHeight)
// Create bitmap image from context using the rect
let imageRef: CGImage = contextImage.cgImage!.cropping(to: rect)!
// Create a new image based on the imageRef and rotate back to the original orientation
let cropped: UIImage = UIImage(cgImage: imageRef, scale: self.scale, orientation: self.imageOrientation)
return cropped
}
}
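A hypothetical call site, cropping a photo to an image view's bounds before displaying it (names and sizes are placeholders):
// Hypothetical usage: center-crop so the photo matches the image view's aspect ratio
let photo = UIImage(named: "header")!    // hypothetical asset
let headerView = UIImageView(frame: CGRect(x: 0, y: 0, width: 320, height: 180))
headerView.image = photo.cropTo(size: headerView.bounds.size)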