I am using a piece of code from this link - Resize UIImage by keeping Aspect ratio and width, and it works perfectly, but I am wondering if it can be altered to preserve hard edges of pixels. I want to double the size of the image and keep the hard edge of the pixels.
class func resizeImage(image: UIImage, newHeight: CGFloat) -> UIImage {
let scale = newHeight / image.size.height
let newWidth = image.size.width * scale
UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight))
image.drawInRect(CGRectMake(0, 0, newWidth, newHeight))
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return newImage
}
What I want it to do: (image omitted — pixels stay sharp when scaled up)
What it does: (image omitted — edges get blurred by interpolation)
In Photoshop there is nearest-neighbour interpolation when resizing; is there something like that in iOS?
Inspired by the accepted answer, updated to Swift 5.
Swift 5
let image = UIImage(named: "Foo")!
let scale: CGFloat = 2.0
let newSize = image.size.applying(CGAffineTransform(scaleX: scale, y: scale))
UIGraphicsBeginImageContextWithOptions(newSize, false, UIScreen.main.scale)
let context = UIGraphicsGetCurrentContext()!
// .none disables interpolation (nearest-neighbour), preserving hard pixel edges
context.interpolationQuality = .none
let newRect = CGRect(origin: .zero, size: newSize)
image.draw(in: newRect)
let newImage = UIImage(cgImage: context.makeImage()!)
UIGraphicsEndImageContext()
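For reuse, the same approach can be wrapped in a small UIImage extension. This is only a sketch under the same assumptions as above (the method name resizedPixelated is my own):

extension UIImage {
    /// Scales the image by the given factor with interpolation disabled,
    /// so hard pixel edges are preserved (nearest-neighbour style).
    func resizedPixelated(scale: CGFloat) -> UIImage? {
        let newSize = size.applying(CGAffineTransform(scaleX: scale, y: scale))
        UIGraphicsBeginImageContextWithOptions(newSize, false, UIScreen.main.scale)
        defer { UIGraphicsEndImageContext() }
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        context.interpolationQuality = .none
        draw(in: CGRect(origin: .zero, size: newSize))
        guard let cgImage = context.makeImage() else { return nil }
        return UIImage(cgImage: cgImage)
    }
}

// Usage: let doubled = UIImage(named: "Foo")?.resizedPixelated(scale: 2)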
Did a bit more digging and found the answer:
https://stackoverflow.com/a/25430447/4196903
but where that answer calls
CGContextSetInterpolationQuality(context, kCGInterpolationHigh)
write
CGContextSetInterpolationQuality(context, CGInterpolationQuality.None)
instead, so the context does no interpolation (nearest-neighbour) when drawing.
You need to use the CISampler class (which is only available in iOS 9) and create your own custom image-processing filter for it, I think.
You can find more information here and here too
I'm perplexed. My app allows the user to take photos with the camera or pick them from the photo album, and at the end those photos are printed into a PDF (using PDFKit). I've tried numerous ways to resize the photos so that the PDF is not so large (8 MB for ~13 photos), but I can't get it working.
Here's one solution I tried:
func resizeImage(image: UIImage) -> UIImage {
if image.size.height >= 1024 && image.size.width >= 1024 {
UIGraphicsBeginImageContext(CGSize(width: 1024, height: 1024))
image.draw(in: CGRect(x: 0, y: 0, width: 1024, height: 1024))
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return newImage!
}
else if image.size.height >= 1024 && image.size.width < 1024 {
UIGraphicsBeginImageContext(CGSize(width: image.size.width, height: 1024))
image.draw(in: CGRect(x: 0, y: 0, width: image.size.width, height: 1024))
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return newImage!
}
else if image.size.width >= 1024 && image.size.height < 1024 {
UIGraphicsBeginImageContext(CGSize(width: 1024, height: image.size.height))
image.draw(in: CGRect(x: 0, y: 0, width: 1024, height: image.size.height))
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return newImage!
}
else {
return image
}
}
And I call it like this:
let theResizedImage = resizeImage(image: image)
This works... sorta. It resizes the image to about a quarter of what it was before (still not great... but better). Here's the issue though: when I draw "theResizedImage" to the PDF...
theResizedImage.draw(in: photoRect)
...the PDF file ends up being twice as large as before! I don't understand how in the world an image that has been resized suddenly becomes two times larger than its original form as soon as it is drawn onto the PDF.
Someone please help me out with this. If you've got a better way to do it that allows the images to be resized even further, then fantastic! I'm not sure if you're familiar with a piece of software called pdfFactoryPro (on PC), but it will shrink a huge (40-something MB) file down to around 3 MB. I need these images resized substantially, and I need them to keep that small size when printed to the PDF so that the PDF stays small. The other option would be some way to resize the PDF itself, the way pdfFactoryPro does.
PLEASE explain your answers thoroughly and with code, or a link to a tutorial. I'm a beginner... if you couldn't tell.
NOTE: Kindly make sure your code response is set up in a way that passes an image to the function... not a URL. As I stated earlier, I'm resizing photos that the user took with the camera, accessed from within the app (imagePickerController).
Thanks!
EDIT: I found out through a lot of Googling that even if you compress the image using UIImageJPEGRepresentation, when you draw the image into the PDF the raw image data is embedded instead of the compressed JPEG.
Someone named "Michael McNabb" posted this solution, but it is really out-dated, due to being in 2013:
NSData *jpegData = UIImageJPEGRepresentation(sourceImage, 0.75);
CGDataProviderRef dp = CGDataProviderCreateWithCFData((__bridge CFDataRef)jpegData);
CGImageRef cgImage = CGImageCreateWithJPEGDataProvider(dp, NULL, true, kCGRenderingIntentDefault);
[[UIImage imageWithCGImage:cgImage] drawInRect:drawRect];
Can someone please translate this to Swift 5 for me? It may be the solution to my question.
func draw(image: UIImage, in rect: CGRect) {
guard let jpegData = image.jpegData(compressionQuality: 0.75),
let dataProvider = CGDataProvider(data: jpegData as CFData),
let cgImage = CGImage(jpegDataProviderSource: dataProvider, decode: nil, shouldInterpolate: true, intent: .defaultIntent) else {
return
}
UIImage(cgImage: cgImage).draw(in: rect)
}
Turns out it's not just one task to resize images for printing to a PDF. It's two tasks.
The first task is resizing the photo. For example, if you don't want your picture to take up much more than about 1 MB, you might resize it to roughly 1024x1024 pixels. If you can accept even lower pixel quality, adjust the limit to your needs.
Resizing example:
func resizeImage(image: UIImage) -> UIImage {
let maxSize = CGFloat(768)
if image.size.height >= maxSize && image.size.width >= maxSize {
UIGraphicsBeginImageContext(CGSize(width: maxSize, height: maxSize))
image.draw(in: CGRect(x: 0, y: 0, width: maxSize, height: maxSize))
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return newImage!
}
else if image.size.height >= maxSize && image.size.width < maxSize {
UIGraphicsBeginImageContext(CGSize(width: image.size.width, height: maxSize))
image.draw(in: CGRect(x: 0, y: 0, width: image.size.width, height: maxSize))
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return newImage!
}
else if image.size.width >= maxSize && image.size.height < maxSize {
UIGraphicsBeginImageContext(CGSize(width: maxSize, height: image.size.height))
image.draw(in: CGRect(x: 0, y: 0, width: maxSize, height: image.size.height))
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return newImage!
}
else {
return image
}
}
Disclaimer: I learned this code from another person on StackOverflow. I don't remember his name, but credits to him.
Second, you then compress the resized image. I don't fully understand compression, but I set mine to something really low haha – do as you will.
Compression example:
func draw(image: UIImage, in rect: CGRect) {
guard let oldImageData = image.jpegData(compressionQuality: 1) else {
return
}
print("The size of the oldImage, before compression, is: \(oldImageData.count)")
guard let jpegData = image.jpegData(compressionQuality: 0.05),
let dataProvider = CGDataProvider(data: jpegData as CFData),
let cgImage = CGImage(jpegDataProviderSource: dataProvider, decode: nil, shouldInterpolate: true, intent: .defaultIntent) else {
return
}
let newImage = UIImage(cgImage: cgImage)
print("The size of the newImage printed on the pdf is: \(jpegData.count)")
newImage.draw(in: rect)
}
Disclaimer: I learned this code from someone named Michael McNabb. Credit goes to him.
I went ahead and drew the image to the PDF at the end of that function. These two things combined took my PDF from being about 24MB or so to about 570 KB.
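Putting the two together, here is a hedged sketch of how they might be combined when generating the PDF with UIGraphicsPDFRenderer (the photos array, page size, and insets are assumptions, not part of the original answer):

// Sketch only: one page per photo; each photo is resized, then drawn via its
// compressed JPEG data so the PDF embeds the small representation, not the raw image.
let pageRect = CGRect(x: 0, y: 0, width: 612, height: 792)   // US Letter, in points
let renderer = UIGraphicsPDFRenderer(bounds: pageRect)
let pdfData = renderer.pdfData { context in
    for photo in photos {                          // `photos` is an assumed [UIImage]
        context.beginPage()
        let resized = resizeImage(image: photo)    // resize step from above
        draw(image: resized, in: pageRect.insetBy(dx: 36, dy: 36))   // compress-and-draw step from above
    }
}
// `pdfData` can then be written to disk or shared.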
For a UIImageView, I am using the aspect fill content mode. It's OK, but for some images it cuts off the top because I used clipsToBounds = true. So here is what I want: I want to make the two content modes active at the same time. For example:
Here is an image view that I set to aspect fill, and an image view I set using contentMode = .top (screenshots omitted).
So I want to merge these two content modes. Is it possible? Thanks in advance.
Update: device scaling is now properly handled, thanks to budidino for that!
You should resize the image so that it has the width of your image view while keeping its aspect ratio. After that, set the image view's content mode to .top and enable clipping to bounds.
The resizeTopAlignedToFill function is a modified version of this answer.
func setImageView() {
imageView.contentMode = .top
imageView.clipsToBounds = true
let image = <custom image>
imageView.image = image.resizeTopAlignedToFill(newWidth: imageView.frame.width)
}
extension UIImage {
func resizeTopAlignedToFill(newWidth: CGFloat) -> UIImage? {
let newHeight = size.height * newWidth / size.width
let newSize = CGSize(width: newWidth, height: newHeight)
UIGraphicsBeginImageContextWithOptions(newSize, false, UIScreen.main.scale)
draw(in: CGRect(origin: .zero, size: newSize))
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return newImage
}
}
Try this:
imageView.contentMode = UIViewContentModeTop;
imageView.image = [UIImage imageWithCGImage:image.CGImage scale:image.size.width / imageView.frame.size.width orientation:UIImageOrientationUp];
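A Swift translation of the same trick, for reference (just a sketch; imageView and image are assumed to exist already, and the maths mirrors the Objective-C line above):

imageView.contentMode = .top
if let cgImage = image.cgImage {
    // Re-wrap the same CGImage with a larger scale factor so the image reports a
    // point width equal to the image view's width; no pixels are resampled.
    imageView.image = UIImage(cgImage: cgImage,
                              scale: image.size.width / imageView.frame.size.width,
                              orientation: .up)
}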
This is my solution. First you set contentMode to .top, then you resize the image with this method (declared in a UIImage extension):
func resize(toWidth scaledToWidth: CGFloat) -> UIImage {
let image = self
let oldWidth = image.size.width
let scaleFactor = scaledToWidth / oldWidth
let newHeight = image.size.height * scaleFactor
let newWidth = oldWidth * scaleFactor
let scaledSize = CGSize(width:newWidth, height:newHeight)
UIGraphicsBeginImageContextWithOptions(scaledSize, false, 0)
image.draw(in: CGRect(x: 0, y: 0, width: scaledSize.width, height: scaledSize.height))
let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return scaledImage!
}
imageView.contentMode = .top
imageView.image = imageView.image?.resize(toWidth: imageView.frame.width)
The accepted answer was scaling down the image and therefore lowering the quality on @2x and @3x devices. This should produce the same result with better image quality:
extension UIImage {
func resizeTopAlignedToFill(containerSize: CGSize) -> UIImage? {
let scaleTarget = containerSize.height / containerSize.width
let scaleOriginal = size.height / size.width
if scaleOriginal <= scaleTarget { return self }
let newHeight = size.width * scaleTarget
let newSize = CGSize(width: size.width, height: newHeight)
UIGraphicsBeginImageContextWithOptions(newSize, false, scale)
self.draw(in: CGRect(origin: .zero, size: newSize))
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return newImage
}
}
Then, to use it:
imageView.contentMode = .scaleAspectFill
imageView.clipsToBounds = true
imageView.image = UIImage(named: "portrait")?.resizeTopAlignedToFill(containerSize: imageView.frame.size)
As a matter of performance, I would suggest putting the UIImageView, sized to the full image aspect ratio, inside a container view with clipsToBounds set to true, as sketched below.
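A minimal sketch of that container-view idea (the view names and sizes here are just illustrative assumptions):

// The image view keeps the image's full aspect ratio at the container's width;
// the container simply clips whatever overflows past its bottom edge.
let containerView = UIView(frame: CGRect(x: 0, y: 0, width: 300, height: 200))
containerView.clipsToBounds = true

let image = UIImage(named: "portrait")!   // hypothetical asset name
let aspect = image.size.height / image.size.width
let imageView = UIImageView(frame: CGRect(x: 0, y: 0,
                                          width: containerView.bounds.width,
                                          height: containerView.bounds.width * aspect))
imageView.image = image
containerView.addSubview(imageView)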
I have Data that contains an animated GIF, and I want to know how to reduce its size before uploading it to the server. I am preferably looking for a pod that would do this, but I have looked everywhere on the internet and I couldn't find anything. For a JPEG I usually use the function below after I create a UIImage from the data:
func scaleImage(image:UIImage, newWidth:CGFloat) -> UIImage {
let scale = newWidth / image.size.width
let newHeight = image.size.height * scale
UIGraphicsBeginImageContext(CGSize(width: newWidth, height: newHeight))
image.draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return newImage!
}
In my app I need to upload photos to a server, so before that I want to resize and compress them to an acceptable size. I tried to resize them in two ways; the first way is:
// image is an instance of original UIImage that I want to resize
let width : Int = 640
let height : Int = 640
let bitsPerComponent = CGImageGetBitsPerComponent(image.CGImage)
let bytesPerRow = CGImageGetBytesPerRow(image.CGImage)
let colorSpace = CGImageGetColorSpace(image.CGImage)
let bitmapInfo = CGImageGetBitmapInfo(image.CGImage)
let context = CGBitmapContextCreate(nil, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo)
CGContextSetInterpolationQuality(context, kCGInterpolationHigh)
CGContextDrawImage(context, CGRect(origin: CGPointZero, size: CGSize(width: CGFloat(width), height: CGFloat(height))), image.CGImage)
image = UIImage(CGImage: CGBitmapContextCreateImage(context))
The other way:
image = RBResizeImage(image, targetSize: CGSizeMake(640, 640))
func RBResizeImage(image: UIImage?, targetSize: CGSize) -> UIImage? {
if let image = image {
let size = image.size
let widthRatio = targetSize.width / image.size.width
let heightRatio = targetSize.height / image.size.height
// Figure out what our orientation is, and use that to form the rectangle
var newSize: CGSize
if(widthRatio > heightRatio) {
newSize = CGSizeMake(size.width * heightRatio, size.height * heightRatio)
} else {
newSize = CGSizeMake(size.width * widthRatio, size.height * widthRatio)
}
// This is the rect that we've calculated out and this is what is actually used below
let rect = CGRectMake(0, 0, newSize.width, newSize.height)
// Actually do the resizing to the rect using the ImageContext stuff
UIGraphicsBeginImageContextWithOptions(newSize, false, 1.0)
image.drawInRect(rect)
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return newImage
} else {
return nil
}
}
After that, I use UIImageJPEGRepresentation to compress the UIImage, but even with compressionQuality set to 1 the photo is still blurry (mostly visible on object edges; maybe it's not a big deal, but the photo is three to five times larger than a comparable photo from Instagram, yet doesn't have the same sharpness). With 0.5 it's even worse, of course, and the photo is still larger (in KB) than the same photo from Instagram.
Photo from my app, compressionQuality is 1, edges are blurry, and size is 341 KB
Photo from Instagram, edges are sharp, and size is 136 KB
EDIT:
OK, but I'm a little confused right now; I'm not sure what to do to maintain the aspect ratio. This is how I crop the image (the scrollView has a UIImageView, so I can move and zoom the image, and at the end I crop the visible part of the scrollView, which is square). Anyway, the image above was originally 2048x2048, but it's still blurry.
var scale = 1/scrollView.zoomScale
var visibleRect : CGRect = CGRect()
visibleRect.origin.x = scrollView.contentOffset.x * scale
visibleRect.origin.y = scrollView.contentOffset.y * scale
visibleRect.size.width = scrollView.bounds.size.width * scale
visibleRect.size.height = scrollView.bounds.size.height * scale
image = crop(image!, rect: visibleRect)
func crop(srcImage : UIImage, rect : CGRect) -> UIImage? {
var imageRef = CGImageCreateWithImageInRect(srcImage.CGImage, rect)
var cropped = UIImage(CGImage: imageRef)
return cropped
}
Your code is right, but the problem is that you don't maintain the aspect ratio of the image.
In your code you create a new rect as
let rect = CGRectMake(0, 0, newSize.width, newSize.height)
If the image has the same height and width this gives a smooth resized image, but if the height and width differ the image comes out blurred. So try to maintain the aspect ratio.
Reply to the edit:
Make either the height or the width of the crop rect constant and derive the other from the image's aspect ratio.
For example, if you keep the width constant, use the following code (see the fuller sketch after this snippet):
visibleRect.size.height = orignalImg.size.height * visibleRect.size.width / orignalImg.size.width
image = crop(image!, rect: visibleRect)
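Put together with the question's code, a sketch of the adjusted crop-rect computation (keeping the width constant and deriving the height from the image's aspect ratio; image and scrollView are the question's own variables):

let scale = 1 / scrollView.zoomScale
var visibleRect = CGRect()
visibleRect.origin.x = scrollView.contentOffset.x * scale
visibleRect.origin.y = scrollView.contentOffset.y * scale
visibleRect.size.width = scrollView.bounds.size.width * scale
// Height follows the image's aspect ratio, so the crop is not distorted later.
visibleRect.size.height = image!.size.height * visibleRect.size.width / image!.size.width
image = crop(image!, rect: visibleRect)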
I need to scale down an image, but in a sharp way. In Photoshop for example there are the image size reduction options "Bicubic Smoother" (blurry) and "Bicubic Sharper".
Is this image downscaling algorithm open sourced or documented somewhere or does the SDK offer methods to do this?
Merely using imageWithCGImage is not sufficient. It will scale, but the result will be blurry and suboptimal whether scaling up or down.
If you want to get the aliasing right and get rid of the "jaggies" you need something like this: http://vocaro.com/trevor/blog/2009/10/12/resize-a-uiimage-the-right-way/.
My working test code looks something like this, which is Trevor's solution with one small adjustment to work with my transparent PNGs:
- (UIImage *)resizeImage:(UIImage*)image newSize:(CGSize)newSize {
CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
CGImageRef imageRef = image.CGImage;
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
// Set the quality level to use when rescaling
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
CGContextConcatCTM(context, flipVertical);
// Draw into the context; this scales the image
CGContextDrawImage(context, newRect, imageRef);
// Get the resized image from the context and a UIImage
CGImageRef newImageRef = CGBitmapContextCreateImage(context);
UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
CGImageRelease(newImageRef);
UIGraphicsEndImageContext();
return newImage;
}
For those using Swift here is the accepted answer in Swift:
func resizeImage(image: UIImage, newSize: CGSize) -> (UIImage) {
let newRect = CGRectIntegral(CGRectMake(0,0, newSize.width, newSize.height))
let imageRef = image.CGImage
UIGraphicsBeginImageContextWithOptions(newSize, false, 0)
let context = UIGraphicsGetCurrentContext()
// Set the quality level to use when rescaling
CGContextSetInterpolationQuality(context, kCGInterpolationHigh)
let flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height)
CGContextConcatCTM(context, flipVertical)
// Draw into the context; this scales the image
CGContextDrawImage(context, newRect, imageRef)
let newImageRef = CGBitmapContextCreateImage(context) as CGImage
let newImage = UIImage(CGImage: newImageRef)
// Get the resized image from the context and a UIImage
UIGraphicsEndImageContext()
return newImage
}
If someone is looking for a Swift version, here is the Swift version of @Dan Rosenstark's accepted answer:
func resizeImage(image: UIImage, newHeight: CGFloat) -> UIImage {
let scale = newHeight / image.size.height
let newWidth = image.size.width * scale
UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight))
image.drawInRect(CGRectMake(0, 0, newWidth, newHeight))
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return newImage
}
If you retain the original aspect ratio of the image while scaling, you'll always end up with a sharp image no matter how much you scale down.
You can use the following method for scaling:
+ (UIImage *)imageWithCGImage:(CGImageRef)imageRef scale:(CGFloat)scale orientation:(UIImageOrientation)orientation
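In Swift that corresponds to the UIImage(cgImage:scale:orientation:) initializer. A minimal sketch, assuming an existing image that you want displayed at half its point size:

if let cgImage = image.cgImage {
    // Re-wrap the same pixels with double the scale factor; the reported point
    // size (scaled.size) halves, but nothing is resampled, so nothing gets blurry.
    let scaled = UIImage(cgImage: cgImage,
                         scale: image.scale * 2,
                         orientation: image.imageOrientation)
    imageView.image = scaled   // hypothetical image view
}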
For Swift 3
func resizeImage(image: UIImage, newSize: CGSize) -> (UIImage) {
let newRect = CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height).integral
UIGraphicsBeginImageContextWithOptions(newSize, false, 0)
let context = UIGraphicsGetCurrentContext()
// Set the quality level to use when rescaling
context!.interpolationQuality = CGInterpolationQuality.default
let flipVertical = CGAffineTransform(a: 1, b: 0, c: 0, d: -1, tx: 0, ty: newSize.height)
context!.concatenate(flipVertical)
// Draw into the context; this scales the image
context?.draw(image.cgImage!, in: CGRect(x: 0.0,y: 0.0, width: newRect.width, height: newRect.height))
let newImageRef = context!.makeImage()! as CGImage
let newImage = UIImage(cgImage: newImageRef)
// Get the resized image from the context and a UIImage
UIGraphicsEndImageContext()
return newImage
}
@YAR your solution is working properly.
There is only one thing which does not fit my requirements: the whole image is resized. I wrote a method which does it like the Photos app on iPhone.
This calculates the "longer side" and cuts off the overflow, which gives much better results in terms of image quality.
- (UIImage *)resizeImageProportionallyIntoNewSize:(CGSize)newSize;
{
CGFloat scaleWidth = 1.0f;
CGFloat scaleHeight = 1.0f;
if (CGSizeEqualToSize(self.size, newSize) == NO) {
//calculate "the longer side"
if(self.size.width > self.size.height) {
scaleWidth = self.size.width / self.size.height;
} else {
scaleHeight = self.size.height / self.size.width;
}
}
//prepare source and target image
UIImage *sourceImage = self;
UIImage *newImage = nil;
// Now we create a context in newSize and draw the image out of the bounds of the context to get
// A proportionally scaled image by cutting of the image overlay
UIGraphicsBeginImageContext(newSize);
// Center the image so that a little is cut off on each edge
CGRect thumbnailRect = CGRectZero;
thumbnailRect.size.width = newSize.width * scaleWidth;
thumbnailRect.size.height = newSize.height * scaleHeight;
thumbnailRect.origin.x = (int) (newSize.width - thumbnailRect.size.width) * 0.5;
thumbnailRect.origin.y = (int) (newSize.height - thumbnailRect.size.height) * 0.5;
[sourceImage drawInRect:thumbnailRect];
newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
if(newImage == nil) NSLog(@"could not scale image");
return newImage ;
}
For Swift 4.2:
extension UIImage {
func resized(By coefficient:CGFloat) -> UIImage? {
guard coefficient >= 0 && coefficient <= 1 else {
print("The coefficient must be a floating point number between 0 and 1")
return nil
}
let newWidth = size.width * coefficient
let newHeight = size.height * coefficient
UIGraphicsBeginImageContext(CGSize(width: newWidth, height: newHeight))
draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
let newImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return newImage
}
}
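Hypothetical usage (the asset name is just an example):

let photo = UIImage(named: "photo")!        // hypothetical asset
let half = photo.resized(By: 0.5)           // half the original width and height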
This extension should scale the image while keeping original aspect ratio. The rest of the image is cropped. (Swift 3)
extension UIImage {
func thumbnail(ofSize proposedSize: CGSize) -> UIImage? {
let scale = min(size.width/proposedSize.width, size.height/proposedSize.height)
let newSize = CGSize(width: size.width/scale, height: size.height/scale)
let newOrigin = CGPoint(x: (proposedSize.width - newSize.width)/2, y: (proposedSize.height - newSize.height)/2)
let thumbRect = CGRect(origin: newOrigin, size: newSize).integral
UIGraphicsBeginImageContextWithOptions(proposedSize, false, 0)
draw(in: thumbRect)
let result = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return result
}
}
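Hypothetical usage, producing a 200x200 thumbnail that is center-cropped while keeping the aspect ratio:

let photo = UIImage(named: "photo")!        // hypothetical asset
let thumb = photo.thumbnail(ofSize: CGSize(width: 200, height: 200))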