I have the following code in a playground:
import UIKit
var str = "Hello, playground"
func rescaledImage(_ image: UIImage, with newSize: CGSize) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: newSize)
    let rescaled = renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: newSize))
    }
    return rescaled
}
let original = UIImage(named: "burn.jpg")!
let resized = rescaledImage(original, with: CGSize(width: 200, height: 200))
let ciImage = CIImage(image: resized)
burn.jpg is a 5000 pixel by 5000 pixel black and white jpg.
The resized image is properly 200 pixels by 200 pixels. The ciImage, however, is 400 pixels by 400 pixels. In fact, no matter what size I resize to, the ciImage always has doubled dimensions.
However, if I just make ciImage out of the original:
let ciImage = CIImage(image: original)
the ciImage will be 5000 by 5000 pixels, instead of being doubled.
So what is causing this doubling? Something in the format of the resized image must be causing this, but I cannot seem to isolate it.
Note that this doubling also happens if I use UIGraphicsBeginImageContextWithOptions instead.
func imageWithImage(image: UIImage, scaledToSize newSize: CGSize) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(newSize, false, 0.0)
    image.draw(in: CGRect(origin: .zero, size: newSize))
    let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return newImage
}
If you check the scale of your images, you'll notice that the original.scale is probably 1, while the resized.scale is probably 2.
You can set the scale of the renderer explicitly using UIGraphicsImageRendererFormat and see if that helps:
func rescaledImage(_ image: UIImage, with newSize: CGSize) -> UIImage {
    let format = UIGraphicsImageRendererFormat()
    format.scale = 1
    let renderer = UIGraphicsImageRenderer(size: newSize, format: format)
    let rescaled = renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: newSize))
    }
    return rescaled
}
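A quick check (a sketch, using the "burn.jpg" asset from the question) should confirm that with the format's scale pinned to 1, the CIImage extent matches the requested size:

```swift
import UIKit

// Sketch: verify that an explicit scale of 1 removes the doubling.
// Assumes "burn.jpg" is available in the bundle, as in the question.
let original = UIImage(named: "burn.jpg")!
let resized = rescaledImage(original, with: CGSize(width: 200, height: 200))
print(resized.scale)                   // should now be 1, not the screen scale
print(CIImage(image: resized)!.extent) // should be (0, 0, 200, 200) – no doubling
```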
Related
func thumbImage(image: UIImage) -> UIImage {
    let cgSize: CGSize = CGSize(width: 100, height: 100)
    let thumb = UIGraphicsImageRenderer(size: cgSize)
    return thumb.image { _ in
        image.draw(in: CGRect(origin: .zero, size: cgSize))
    }
}
The final image is 300x300.
I would like, no matter the iPhone screen resolution, for the image to be 100x100 (it is a square image, of course).
How can I modify this code to achieve this result?
(I'm open to alternative ways of achieving this.)
func thumbImage(image: UIImage, pxWidth: Int, pxHeight: Int) -> UIImage {
    let cgSize = CGSize(width: pxWidth, height: pxHeight)
    let rect = CGRect(x: 0, y: 0, width: pxWidth, height: pxHeight)
    UIGraphicsBeginImageContextWithOptions(cgSize, false, 1.0)
    image.draw(in: rect)
    let thumb = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    let compressedThumb = thumb!.jpegData(compressionQuality: 0.70)
    return UIImage(data: compressedThumb!)!
}
This alternative with UIGraphicsBeginImageContextWithOptions (passing an explicit scale of 1.0) works and keeps the code as short as the original. (I also added some compression and conversion code.)
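For completeness, the same pixel-exact result can be had with UIGraphicsImageRenderer by pinning the format's scale to 1 (a sketch, not part of the original answer):

```swift
import UIKit

// Sketch: a pixel-exact thumbnail via UIGraphicsImageRenderer.
// format.scale = 1 makes one point equal one pixel, regardless of the screen.
func thumbImage(image: UIImage, pxWidth: Int, pxHeight: Int) -> UIImage {
    let size = CGSize(width: pxWidth, height: pxHeight)
    let format = UIGraphicsImageRendererFormat()
    format.scale = 1
    return UIGraphicsImageRenderer(size: size, format: format).image { _ in
        image.draw(in: CGRect(origin: .zero, size: size))
    }
}
```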
I have a bunch of CIFilters that finally scale & crop a large image (from the iPhone camera) to a 1080x1920 CIImage.
I then want to save the image as a JPG:
var outputFilter: CIFilter?
...
if let ciImage = outputFilter?.outputImage {
    let outputImage = UIImage(ciImage: ciImage)
    let data = outputImage.jpegData(compressionQuality: 0.8)
    ...
}
The ciImage.extent is 1080x1920, outputImage.size is also 1080x1920, outputImage.scale is 1.0.
The image saved to disk however is 3x as large: 3240x5760.
What am I missing?
UIImage(ciImage:) will return an image based on your screen scale; if you check your screen scale, it will be 3x. What you need is to initialize your UIImage with the screen scale:
let outputImage = UIImage(ciImage: ciImage, scale: UIScreen.main.scale, orientation: .up)
To render the image you can use UIGraphicsImageRenderer:
extension CIImage {
    var rendered: UIImage {
        let cgImage = CIContext(options: nil).createCGImage(self, from: extent)!
        let size = extent.size
        let format = UIGraphicsImageRendererFormat.default()
        format.opaque = false
        return UIGraphicsImageRenderer(size: size, format: format).image { ctx in
            // Flip the context vertically: Core Graphics and UIKit use opposite y-axes.
            var transform = CGAffineTransform(scaleX: 1, y: -1)
            transform = transform.translatedBy(x: 0, y: -size.height)
            ctx.cgContext.concatenate(transform)
            ctx.cgContext.draw(cgImage, in: CGRect(origin: .zero, size: size))
        }
    }
}
I needed to do the following to both get the correct size and avoid bad rendering (as discussed in the comments of Leo Dabus' answer).
private func renderImage(ciImage: CIImage) -> UIImage? {
    var outputImage: UIImage?
    UIGraphicsBeginImageContext(CGSize(width: 1080, height: 1920))
    if let context = UIGraphicsGetCurrentContext() {
        context.interpolationQuality = .high
        context.setShouldAntialias(true)
        let inputImage = UIImage(ciImage: ciImage)
        inputImage.draw(in: CGRect(x: 0, y: 0, width: 1080, height: 1920))
        outputImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
    }
    return outputImage
}
Goal: Crop a UIImage (that starts with a scale property of 2.0)
I perform the following code:
let croppedCGImage = originalUIImage.cgImage!.cropping(to: cropRect)
let croppedUIImage = UIImage(cgImage: croppedCGImage!)
This code works, however the result, croppedUIImage, has an incorrect scale property of 1.0.
I've tried specifying the scale when creating the final image:
let croppedUIImage = UIImage(cgImage: croppedCGImage!, scale: 2.0, orientation: .up)
This yields the correct scale, but it cuts the size dimensions in half incorrectly.
What should I do here?
(*Note: the scale property on the UIImage is important because I later save the image with UIImagePNGRepresentation(_ image: UIImage) which is affected by the scale property)
Edit:
I got the following to work. Unfortunately it's just substantially slower than the CGImage cropping function.
extension UIImage {
    func cropping(to rect: CGRect) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(rect.size, false, self.scale)
        self.draw(in: CGRect(x: -rect.origin.x, y: -rect.origin.y, width: self.size.width, height: self.size.height))
        let croppedImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return croppedImage
    }
}
Try this:
extension UIImage {
    func imageByCropToRect(rect: CGRect, scale: Bool) -> UIImage {
        var rect = rect
        var scaleFactor: CGFloat = 1.0
        if scale {
            scaleFactor = self.scale
            rect.origin.x *= scaleFactor
            rect.origin.y *= scaleFactor
            rect.size.width *= scaleFactor
            rect.size.height *= scaleFactor
        }
        var image: UIImage? = nil
        if rect.size.width > 0 && rect.size.height > 0 {
            let imageRef = self.cgImage!.cropping(to: rect)
            image = UIImage(cgImage: imageRef!, scale: scaleFactor, orientation: self.imageOrientation)
        }
        return image!
    }
}
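A usage sketch (originalUIImage is the 2.0-scale image from the question; the rect values are illustrative):

```swift
import UIKit

// cropRect is in points; with scale: true it is multiplied up to pixels
// before the CGImage crop, and the result keeps the original 2.0 scale.
let cropRect = CGRect(x: 10, y: 10, width: 50, height: 50)
let cropped = originalUIImage.imageByCropToRect(rect: cropRect, scale: true)
// cropped.scale should be 2.0 and cropped.size 50x50 points (100x100 pixels)
```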
Use this extension:
extension UIImage {
    func cropping(to quality: CGInterpolationQuality, rect: CGRect) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(rect.size, false, self.scale)
        let context = UIGraphicsGetCurrentContext()!
        context.interpolationQuality = quality
        let drawRect = CGRect(x: -rect.origin.x, y: -rect.origin.y, width: self.size.width, height: self.size.height)
        context.clip(to: CGRect(x: 0, y: 0, width: rect.size.width, height: rect.size.height))
        self.draw(in: drawRect)
        let croppedImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return croppedImage
    }
}
I'm using the ImageHelper pod for iOS and tvOS; it works perfectly and might also fit your needs.
It brings a lot of UIImage extensions, such as:
Crop and Resize
// Crops an image to a new rect
func crop(bounds: CGRect) -> UIImage?

// Crops an image to a centered square
func cropToSquare() -> UIImage?

// Resizes an image
func resize(size: CGSize, contentMode: UIImageContentMode = .ScaleToFill) -> UIImage?
Screen Density
// To create an image that is Retina aware, use the screen scale as a multiplier for your size. You should also use this technique for padding or borders.
let width = 140 * UIScreen.mainScreen().scale
let height = 140 * UIScreen.mainScreen().scale
let image = UIImage(named: "myImage")?.resize(CGSize(width: width, height: height))
Also stuff like image effects:
// Applies a light blur effect to the image
func applyLightEffect() -> UIImage?
// Applies a extra light blur effect to the image
func applyExtraLightEffect() -> UIImage?
// Applies a dark blur effect to the image
func applyDarkEffect() -> UIImage?
// Applies a color tint to an image
func applyTintEffect(tintColor: UIColor) -> UIImage?
// Applies a blur to an image based on the specified radius, tint color saturation and mask image
func applyBlur(blurRadius:CGFloat, tintColor:UIColor?, saturationDeltaFactor:CGFloat, maskImage:UIImage? = nil) -> UIImage?
- (UIImage *)getNeedImageFrom:(UIImage *)image cropRect:(CGRect)rect {
    CGImageRef subImage = CGImageCreateWithImageInRect(image.CGImage, rect);
    UIImage *croppedImage = [UIImage imageWithCGImage:subImage];
    CGImageRelease(subImage);
    return croppedImage;
}
Calling it:
UIImage *imageSample = image;
CGRect rectMake1 = CGRectMake(0, 0, imageSample.size.width * 1 / 4, imageSample.size.height);
UIImage *img1 = [[JRGlobal sharedInstance] getNeedImageFrom:imageSample cropRect:rectMake1];
I'm building an image manager class to deal with resizing, cropping, etc. with UIImages. I'm using CGContext because of its speed compared to UIGraphicsContext with JPEGs, per the benchmarks at http://nshipster.com/image-resizing/.
This is my manager class:
func resizeImage(exportedWidth width: Int, exportedHeight height: Int, originalImage image: UIImage) -> UIImage? {
    let cgImage = image.CGImage
    let bitsPerComponent = CGImageGetBitsPerComponent(cgImage)
    let bytesPerRow = CGImageGetBytesPerRow(cgImage)
    let colorSpace = CGImageGetColorSpace(cgImage)
    let bitmapInfo = CGImageGetBitmapInfo(cgImage)
    let context = CGBitmapContextCreate(nil, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo.rawValue)
    CGContextSetInterpolationQuality(context, .High)
    CGContextDrawImage(context, CGRect(origin: CGPointZero, size: CGSize(width: CGFloat(width), height: CGFloat(height))), cgImage)
    let scaledImage = CGBitmapContextCreateImage(context).flatMap { UIImage(CGImage: $0) }
    return scaledImage
}
Nothing really too crazy, but I'm having trouble when I add it to a subclassed UITextView. I have tried making the image size doubled and then compressing it down with a UIImageView, to no avail. This code has it properly showing up with the correct size, but it looks as if the image-pixel-to-screen-pixel ratio is 1:1 instead of the ideal 2:1 for the Retina screen on my iPhone SE.
func imagePickerController(picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : AnyObject]) {
    if let pickedImage = info[UIImagePickerControllerOriginalImage] as? UIImage {
        let textViewWidth: CGFloat = self.view.frame.size.width - 130
        let percentResize = textViewWidth / pickedImage.size.width
        let toBeExportedHeight = pickedImage.size.height * percentResize
        let resizedImage = ImageManipulationManager.sharedInstance.resizeImage(exportedWidth: Int(textViewWidth), exportedHeight: Int(toBeExportedHeight), originalImage: pickedImage)
        let attachment = NSTextAttachment()
        attachment.image = resizedImage
        let attString = NSAttributedString(attachment: attachment)
        textView.textStorage.insertAttributedString(attString, atIndex: textView.selectedRange.location)
        textInputbar.layoutIfNeeded()
        textView.becomeFirstResponder()
        print(textView.text.characters.count)
    }
    dismissViewControllerAnimated(true, completion: nil)
}
An image for a double-resolution screen needs to be double-resolution. That means it should be twice as wide and twice as high as the actual size you want, with a scale of 2 attached to the UIImage when you derive it from the CGImage.
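As a minimal sketch of that point (doubledCGImage is a hypothetical CGImage that is already twice the target size in pixels):

```swift
import UIKit

// Sketch: a 200x200-pixel CGImage wrapped with scale 2 reports a
// 100x100-point size, so it renders crisply on a 2x screen.
let retina = UIImage(cgImage: doubledCGImage, scale: 2, orientation: .up)
// retina.size is half of doubledCGImage's pixel dimensions; retina.scale == 2
```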
The image becomes blurry after applying roundImage, which turns a UIImage into a circle:
extension UIImage {
    func roundImage() -> UIImage {
        let newImage = self.copy() as! UIImage
        let cornerRadius = self.size.height / 2
        UIGraphicsBeginImageContextWithOptions(self.size, false, 1.0)
        let bounds = CGRect(origin: CGPointZero, size: self.size)
        UIBezierPath(roundedRect: bounds, cornerRadius: cornerRadius).addClip()
        newImage.drawInRect(bounds)
        let finalImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return finalImage
    }
}
Why are you using a Bezier path? Just set the corner radius on the UIImageView.
If your image is larger than the image view, resize the image to the image view's size first, then set the corner radius on the UIImageView.
It will work. It works for me.
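The suggestion above as a sketch (imageView is a hypothetical UIImageView already laid out as a square):

```swift
import UIKit

// Rounding via the layer instead of redrawing the image bitmap.
imageView.layer.cornerRadius = imageView.bounds.width / 2
imageView.clipsToBounds = true
imageView.contentMode = .scaleAspectFill
```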
Replace the following line
UIGraphicsBeginImageContextWithOptions(self.size, false, 1.0)
with
UIGraphicsBeginImageContextWithOptions(self.size, false, 0.0)
Try this one:
let image = UIImageView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
I recommend AlamofireImage (https://github.com/Alamofire/AlamofireImage).
It makes it very easy to create rounded or circular images without losing quality.
Just like this:
let image = UIImage(named: "unicorn")!
let radius: CGFloat = 20.0
let roundedImage = image.af_imageWithRoundedCornerRadius(radius)
let circularImage = image.af_imageRoundedIntoCircle()
Voila!
Your issue is that you are using a scale of 1, which is the lowest "quality".
Setting the scale to 0 will use the device scale, which uses the image as is.
A side note: Functions inside a class that return a new instance of that class can be implemented as class functions. This makes it very clear what the function does. It does not manipulate the existing image. It returns a new one.
Since you were talking about circles, I also corrected your code so it will now make a circle of any image and crop it. You might want to center this.
extension UIImage {
    class func roundImage(image: UIImage) -> UIImage? {
        // copy
        guard let newImage = image.copy() as? UIImage else {
            return nil
        }
        // start context
        UIGraphicsBeginImageContextWithOptions(newImage.size, false, 0.0)
        // bounds: a square sized to the smaller dimension, clipped to a circle
        let minDim = min(newImage.size.height, newImage.size.width)
        let cornerRadius = minDim / 2
        let bounds = CGRect(origin: CGPointZero, size: CGSize(width: minDim, height: minDim))
        UIBezierPath(roundedRect: bounds, cornerRadius: cornerRadius).addClip()
        // new image
        newImage.drawInRect(bounds)
        let finalImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        // crop
        let maybeCrop = UIImage.crop(finalImage, cropRect: bounds)
        return maybeCrop
    }

    class func crop(image: UIImage, cropRect: CGRect) -> UIImage? {
        guard let imgRef = CGImageCreateWithImageInRect(image.CGImage, cropRect) else {
            return nil
        }
        return UIImage(CGImage: imgRef)
    }
}