Here is the image I wish to crop (to get rid of the options at the bottom). The Back, Draw, and Delete buttons are actual menu items; the ones above them are part of the image.
This is the result of changing y: 100 and height: 1948.
I want to remove the bottom 100 pixels of an image. My application runs on the iPad and all of the images are saved horizontally (landscape).
This code is one I took from Stack Overflow on a similar question; however, it does not work for any values of x, y, width, and height. The image is never cropped from the bottom. Changing the values tends to only crop the image from the left and right (the 1536-pixel side of the iPad, not the 2048).
func cropImage(image: UIImage) -> UIImage {
    let rect = CGRect(x: 0, y: 0, width: 1536, height: 2048) // 1536 x 2048 pixels
    let cgImage = image.cgImage!
    let croppedCGImage = cgImage.cropping(to: rect)
    return UIImage(cgImage: croppedCGImage!)
}
Does anyone know what is missing? All I need is to crop out the bottom part, since the images are saved from a previous view; the menu options appear in a stack view at the bottom, which is still there when I save the image, hence the crop. Thanks.
The "image I wish to crop" image you posted is 2048 x 1536 pixels...
If you want to crop the "bottom 100 pixels" your crop rect should be
CGRect(x: 0, y: 0, width: 2048, height: 1536 - 100)
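A hedged sketch of how that crop might look in code (the helper names are mine, not from the question). Note that CGImage.cropping(to:) works in pixels with the origin at the top-left, while UIImage.size is in points, so the strip to remove is converted via the image's scale:

```swift
import Foundation
#if canImport(UIKit)
import UIKit
#endif

// Pure geometry: the rect that keeps everything except the bottom strip.
// Kept separate from UIKit so the pixel math is easy to verify.
func rectCroppingBottom(pixelWidth: CGFloat, pixelHeight: CGFloat,
                        bottomPixels: CGFloat) -> CGRect {
    CGRect(x: 0, y: 0, width: pixelWidth, height: max(0, pixelHeight - bottomPixels))
}

#if canImport(UIKit)
// Hypothetical helper: crops `points` worth of content off the bottom.
func cropBottom(of image: UIImage, points: CGFloat) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }
    let rect = rectCroppingBottom(pixelWidth: CGFloat(cgImage.width),
                                  pixelHeight: CGFloat(cgImage.height),
                                  bottomPixels: points * image.scale)
    guard let cropped = cgImage.cropping(to: rect) else { return nil }
    // Preserve scale and orientation so the result displays like the input.
    return UIImage(cgImage: cropped, scale: image.scale,
                   orientation: image.imageOrientation)
}
#endif
```

For the 2048 × 1536 image in the question, removing the bottom 100 pixels yields a rect of 2048 × 1436, matching the answer above.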
Related
I'm trying to use UIGraphicsImageRenderer to fix the orientation of images before uploading to my web server, and the following code works but the images it produces have a higher PPI than I want:
let imageRendererFormat = UIGraphicsImageRendererFormat()
let imageRenderer = UIGraphicsImageRenderer(size: image.size, format: imageRendererFormat)
let pngData = imageRenderer.pngData(actions: { context in
    let rect = CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height)
    image.draw(in: rect)
})
The images that are produced have the correct resolution, but the PPI is 216 instead of the expected 72. I observed that the original image's scale is 1.0, but the scale of context.currentImage is 3.0. I'm not sure where this 3.0 number is coming from. This probably explains the 216 PPI, though, since 3 x 72 = 216.
How can I fix the PPI of the images being rendered? Should I apply a scale factor to the context or something?
I figured it out on my own: UIGraphicsImageRendererFormat has a 'scale' property, which defaults to the scale of the main screen.
I had to explicitly set the scale to 1.0 in order to get an image with the correct PPI.
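To make the fix concrete, here is a minimal sketch (the wrapper function is mine, assuming the same `image` variable as in the question). The 216 PPI figure is just the 72 PPI baseline multiplied by the 3× context scale, so pinning the scale to 1 restores 72:

```swift
import Foundation
#if canImport(UIKit)
import UIKit
#endif

// The observed PPI is the baseline times the context scale:
// 72 * 3 = 216, which is exactly the number reported in the question.
func effectivePPI(scale: CGFloat, baseline: CGFloat = 72) -> CGFloat {
    baseline * scale
}

#if canImport(UIKit)
// Sketch of the fix: pin the renderer format's scale to 1 so the output
// bitmap matches image.size in pixels instead of inheriting 2x/3x.
func pngDataAtOneX(from image: UIImage) -> Data {
    let format = UIGraphicsImageRendererFormat()
    format.scale = 1  // the default would be the main screen's scale
    let renderer = UIGraphicsImageRenderer(size: image.size, format: format)
    return renderer.pngData { _ in
        image.draw(in: CGRect(origin: .zero, size: image.size))
    }
}
#endif
```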
I want to add a white border to a photo while preserving its quality, so I use UIGraphicsImageRenderer to draw a white background and then draw the photo on top. The result is a dramatic increase in memory usage. Is there a better way?
The resolution of the original picture is 4032 * 3024.
let renderer = UIGraphicsImageRenderer(size: CGSize(width: canvasSideLength, height: canvasSideLength))
let newImage = renderer.image { context in
    UIColor.white.setFill()
    context.fill(CGRect(x: 0, y: 0, width: canvasSideLength, height: canvasSideLength))
    image.draw(in: CGRect(x: photoCanvasX, y: photoCanvasY, width: photoCanvasWidth, height: photoCanvasHeight))
}
When considering the memory used, don’t be misled by the size of the JPG or PNG file, because that is generally compressed. You will require four bytes per pixel when performing image operations in memory (i.e., width × height × 4, with dimensions in pixels).
Worse, by default, UIGraphicsImageRenderer will generate images at screen resolution (e.g. potentially 2× or 3× depending upon your device). E.g. on a 3× device, consider:
let rect = CGRect(origin: .zero, size: CGSize(width: 8_519, height: 8_519))
let image = UIGraphicsImageRenderer(bounds: rect).image { _ in
    UIColor.white.setFill()
    UIBezierPath(rect: rect).fill()
}
print(image.cgImage!.width, "×", image.cgImage!.height)
That will print:
25557 × 25557
When you consider that it then takes 4 bytes per pixel, that adds up to 2.6 GB. Even if the image is only 4,032 × 3,024, as suggested by your revised question, that’s still 439 MB per image.
You may want to make sure to specify an explicit scale of 1:
let rect = CGRect(origin: .zero, size: CGSize(width: 8_519, height: 8_519))
let format = UIGraphicsImageRendererFormat()
format.scale = 1
let image = UIGraphicsImageRenderer(bounds: rect, format: format).image { _ in
    UIColor.white.setFill()
    UIBezierPath(rect: rect).fill()
}
print(image.cgImage!.width, "×", image.cgImage!.height)
That will print, as you expected:
8519 × 8519
Then you are only going to require 290 MB for the image. That’s still a lot of memory, but a lot less than if you use the default scale (on Retina devices). Or, considering your revised 4,032 × 3,024 image, this 1× image could take only 49 MB, 1/9th the size of the corresponding 3× image where you didn’t set the scale.
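The back-of-the-envelope numbers above can be checked with a tiny helper (mine, not from the answer); the UIKit part sketches the bordered render from the question with the explicit 1× format applied, where `canvasSide` and `photoRect` stand in for the question's variables:

```swift
import Foundation
#if canImport(UIKit)
import UIKit
#endif

// Rough in-memory footprint of a rendered bitmap: 4 bytes per pixel,
// with both dimensions multiplied by the context scale.
func bitmapBytes(width: Int, height: Int, scale: Int) -> Int {
    width * scale * height * scale * 4
}

#if canImport(UIKit)
// Sketch of the questioner's bordered render with an explicit 1x scale.
func borderedImage(_ image: UIImage, canvasSide: CGFloat, photoRect: CGRect) -> UIImage {
    let format = UIGraphicsImageRendererFormat()
    format.scale = 1  // avoid a 2x/3x (4x/9x memory) backing bitmap
    let canvas = CGRect(x: 0, y: 0, width: canvasSide, height: canvasSide)
    return UIGraphicsImageRenderer(bounds: canvas, format: format).image { context in
        UIColor.white.setFill()
        context.fill(canvas)
        image.draw(in: photoRect)
    }
}
#endif
```

An 8,519-square bitmap comes to about 290 MB at 1× and 2.6 GB at 3×; the 4,032 × 3,024 photo comes to about 49 MB at 1×, matching the figures above.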
I am receiving an image from the backend that is large, as I have to use the same image as a profile picture and also show it in the bottom bar of a tab bar at size 30×30. I have tried to scale the image down in various ways, but nothing is working.
I tried Alamofire's methods, which also didn't work (the image appears blurred and distorted):
func resizeImageWithoutDistortion(image: UIImage, size: CGSize) -> UIImage {
    // 1. Scale image to size disregarding aspect ratio
    let scaledImage = image.af_imageScaled(to: size)
    // 2. Scale image to fit within specified size while maintaining aspect ratio
    let aspectScaledToFitImage = image.af_imageAspectScaled(toFit: size)
    // 3. Scale image to fill specified size while maintaining aspect ratio
    let aspectScaledToFillImage = image.af_imageAspectScaled(toFill: size)
    return scaledImage.roundImage()
}
I also tried the following, which didn't work either:
func resizeImage(_ newWidth: CGFloat) -> UIImage {
    let ratio = size.width / size.height
    if ratio > 1 {
        UIGraphicsBeginImageContext(CGSize(width: newWidth, height: newWidth))
        draw(in: CGRect(x: newWidth * ratio / 2 - newWidth, y: 0, width: newWidth * ratio, height: newWidth))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!.roundImage()
    } else {
        UIGraphicsBeginImageContext(CGSize(width: newWidth, height: newWidth))
        draw(in: CGRect(x: 0, y: (newWidth / ratio - newWidth) / 2 * (-1), width: newWidth, height: newWidth / ratio))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!.roundImage()
    }
}
In the screenshot, the image at the bottom is very distorted.
Ishika, your problem is not quality loss. Your problem is that you don't take iOS scaling into consideration.
Points and pixels are not the same thing.
If you have a UIImageView at W: 30, H: 30 (points), then to display an image clearly without affecting its quality, you need an image with a pixel size of:
30 * UIScreen.main.scale = 60 pixels (on a 2× screen)
or
30 * UIScreen.main.scale = 90 pixels (on a 3× screen)
This is also the same reason why you need to provide iOS with #2x and #3x scaled images.
So if you want to resize your UIImage to a smaller size, you need to take scaling into consideration. Otherwise your images will be scaled up to fill your UIImageView and will become blurry, because the UIImageView is bigger than the UIImage.
A good way to see this is to set yourImageView.contentMode = .center; you will notice the UIImage is smaller than the UIImageView itself.
I don't code in Swift, so I can't provide you with direct code (too tired to translate), but if you look at other threads, such as:
scale image to smaller size in swift3
you'll see that your UIGraphicsBeginImageContext call is, for example, missing the scale input.
scale
The scale factor to apply to the bitmap. If you specify a value of 0.0, the scale factor is set to the scale factor of the device's main screen.
Edit:
In your case, something like this:
newWidth = newWidth / UIScreen.main.scale
UIGraphicsBeginImageContextWithOptions(CGSize(width: newWidth, height: newWidth), true, 0)
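Putting the answer's two lines into a complete function, a hedged sketch (the names are mine, not from the answer): the pure helper shows the points-to-pixels arithmetic from above, and the UIKit part passes 0 as the scale so the context adopts the device's screen scale.

```swift
import Foundation
#if canImport(UIKit)
import UIKit
#endif

// Points-to-pixels arithmetic: a 30-point image view needs a
// 30 * scale pixel image to look sharp (60 at 2x, 90 at 3x).
func pixelsNeeded(points: CGFloat, screenScale: CGFloat) -> CGFloat {
    points * screenScale
}

#if canImport(UIKit)
// Sketch of the suggested fix: size the context in points and pass
// 0 as the scale, so iOS renders at the screen's native scale.
func resizedSquare(_ image: UIImage, sideInPoints: CGFloat) -> UIImage? {
    let size = CGSize(width: sideInPoints, height: sideInPoints)
    UIGraphicsBeginImageContextWithOptions(size, true, 0)  // 0 = screen scale
    defer { UIGraphicsEndImageContext() }
    image.draw(in: CGRect(origin: .zero, size: size))
    return UIGraphicsGetImageFromCurrentImageContext()
}
#endif
```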
I am cropping an image with:
UIGraphicsBeginImageContext(croppingRect.size)
let context = UIGraphicsGetCurrentContext()
context?.clip(to: CGRect(x: 0, y: 0, width: croppingRect.width, height: croppingRect.height))
image.draw(at: CGPoint(x: -croppingRect.origin.x, y: -croppingRect.origin.y))
let croppedImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
The cropped image sometimes has a 1-px white edge on the right and bottom borders. The bottom-right corner, zoomed in to show individual pixels, looks like this. Apparently the border is not plain white but shades of white, which may come from later compression.
Where does this white edge artifact come from?
The issue was that the values of the croppingRect were not whole pixels.
As the values for x, y, width, and height were calculated CGFloat numbers, the results would sometimes be fractional (e.g. 1.3 instead of 1.0). The solution was to round these values:
let cropRect = CGRect(
    x: x.rounded(.towardZero),
    y: y.rounded(.towardZero),
    width: width.rounded(.towardZero),
    height: height.rounded(.towardZero))
The rounding has to be .towardZero (see here what it means) because the cropping rectangle should (usually) not be larger than the image rect.
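Applied to the cropping code from the question, a hedged sketch (the helper and function names are mine): the rect is snapped onto whole pixels first, so no fractional edge gets resampled into a light border.

```swift
import Foundation
#if canImport(UIKit)
import UIKit
#endif

// Snap a crop rect onto whole pixels, rounding toward zero so the
// aligned rect never grows past the image bounds.
func pixelAligned(_ rect: CGRect) -> CGRect {
    CGRect(x: rect.origin.x.rounded(.towardZero),
           y: rect.origin.y.rounded(.towardZero),
           width: rect.width.rounded(.towardZero),
           height: rect.height.rounded(.towardZero))
}

#if canImport(UIKit)
// The question's crop, with the rect aligned before the context is created.
func crop(_ image: UIImage, to croppingRect: CGRect) -> UIImage? {
    let rect = pixelAligned(croppingRect)
    UIGraphicsBeginImageContext(rect.size)
    defer { UIGraphicsEndImageContext() }
    let context = UIGraphicsGetCurrentContext()
    context?.clip(to: CGRect(origin: .zero, size: rect.size))
    image.draw(at: CGPoint(x: -rect.origin.x, y: -rect.origin.y))
    return UIGraphicsGetImageFromCurrentImageContext()
}
#endif
```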
I've been trying to shrink down an image to a smaller size for a while and cannot figure out why it loses quality even though I've come across tutorials saying it should not. First, I crop my image into a square and then use this code:
let newRect = CGRectIntegral(CGRectMake(0,0, newSize.width, newSize.height))
let imageRef = image.CGImage
UIGraphicsBeginImageContextWithOptions(newSize, false, 0)
let context = UIGraphicsGetCurrentContext()
// Set the quality level to use when rescaling
CGContextSetInterpolationQuality(context, CGInterpolationQuality.High)
let flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height)
CGContextConcatCTM(context, flipVertical)
// Draw into the context; this scales the image
CGContextDrawImage(context, newRect, imageRef)
let newImageRef = CGBitmapContextCreateImage(context)! as CGImage
let newImage = UIImage(CGImage: newImageRef)
// Get the resized image from the context and a UIImage
UIGraphicsEndImageContext()
I've also tried this code with the same results:
let newSize: CGSize = CGSize(width: 30, height: 30)
let rect = CGRectMake(0, 0, newSize.width, newSize.height)
UIGraphicsBeginImageContextWithOptions(newSize, false, 0.0)
UIBezierPath(
    roundedRect: rect,
    cornerRadius: 2
).addClip()
image.drawInRect(rect)
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
let imageData = UIImageJPEGRepresentation(newImage, 1.0)
sharedInstance.my.append(UIImage(data: imageData!)!)
I still get a blurry image after resizing. I compare it to when I have an image view and set it to aspect fill/fit, and the image is much clearer and still smaller. That is the quality I'm trying to get and can't figure out what I'm missing. I put two pictures here, the first is the clearer image using an imageView and the second is a picture resized with my code. How can I manipulate an image to look clear like in the image View?
You should use
let newSize: CGSize = CGSize(width: 30 * UIScreen.mainScreen().scale, height: 30 * UIScreen.mainScreen().scale)
This is because different iPhones have different screen scales.
select the image view, click "Size" inspector and change the "X",
"Y", "Width" and "Height" attributes.
X = 14
Y = 10
Width = 60
Height = 60
For the round radius you can implement this code:
cell.ImageView.layer.cornerRadius = 30.0
cell.ImageView.clipsToBounds = true
or
go to the Identity inspector, click the Add button (+) in the lower left of
the user defined runtime attributes editor.
Double click on the Key Path field of the new attribute to edit the key path for the attribute to layer.cornerRadius
Set the type to Number and
the value to 30. To make a circular image from a square image, the
radius is set to half the width of the image view.
Duncan gave you a good explanation: 30 by 30 is too small, which is why the quality of the image is lost. I recommend you use 60 by 60.
In the sample images you show, you're drawing the "after" image larger than the starting image. If you reduce an image from some larger size to 30 pixels by 30 pixels, you are throwing away a lot of information. If you then draw the 30x30 image at a larger size, it's going to look bad. 30 by 30 is a tiny image, without much detail.