Swift: UIGraphicsBeginImageContextWithOptions scale factor set to 0 but not applied - ios

I used to resize an image with the following code and it worked just fine with regard to the scale factor. Now, with Swift 3, I can't figure out why the scale factor is not taken into account. The image is resized, but the scale factor is not applied. Do you know why?
let layer = self.imageview.layer
UIGraphicsBeginImageContextWithOptions(layer.bounds.size, true, 0)
layer.render(in: UIGraphicsGetCurrentContext()!)
let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
print("SCALED IMAGE SIZE IS \(scaledImage!.size)")
print(scaledImage!.scale)
For example, if I take a screenshot on an iPhone 5, the image size will be 320*568. I used to get 640*1136 with the exact same code. What can cause the scale factor not to be applied?
When I print the scale of the image, it prints 1, 2 or 3 depending on the device resolution, but the scale is not applied to the image taken from the context.

scaledImage!.size does not return the image size in pixels; it returns the size in points.
The underlying CGImage's width and height (CGImageGetWidth / CGImageGetHeight) return the size in pixels, that is, image.size * image.scale.
If you want to test it out, first import CoreGraphics:
let imageSize = scaledImage!.size                     // (320, 568) in points
let imageWidthInPixel = scaledImage!.cgImage!.width   // 640
let imageHeightInPixel = scaledImage!.cgImage!.height // 1136
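For clarity, here is a small helper (my own sketch, not part of the original answer) that computes the pixel dimensions by multiplying the point size by the scale:

import UIKit

extension UIImage {
    // Pixel dimensions = point size * scale factor
    var pixelSize: CGSize {
        return CGSize(width: size.width * scale, height: size.height * scale)
    }
}

// On an iPhone 5 (2x screen), the context code above gives:
// scaledImage!.size      -> (320.0, 568.0) points
// scaledImage!.pixelSize -> (640.0, 1136.0) pixels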

Related

How to scale to desired size in pixels using af_imageAspectScaled

I'm using AlamofireImage to crop a user profile picture before sending it to the server. Our server has some restrictions, and we can't send images larger than 640x640.
I'm using the af_imageAspectScaled UIImage extension function like so:
let croppedImage = image.af_imageAspectScaled(
    toFill: CGSize(
        width: 320,
        height: 320
    )
)
I was expecting this to crop the image to a 320px by 320px image. However, I found out that the output image is being saved as a 640x640px image with scale 2.0. The following XCTest shows this:
import XCTest
import UIKit
import AlamofireImage

class UIImageTests: XCTestCase {
    func testAfImageAspectScaled() {
        if let image = UIImage(
            named: "ipad_mini2_photo_1.JPG",
            in: Bundle(for: type(of: self)),
            compatibleWith: nil
        ) {
            print(image.scale) // prints 1.0
            print(image.size)  // prints (1280.0, 960.0)
            let croppedImage = image.af_imageAspectScaled(
                toFill: CGSize(
                    width: 320,
                    height: 320
                )
            )
            print(croppedImage.scale) // prints 2.0
            print(croppedImage.size)  // prints (320.0, 320.0)
        }
    }
}
I'm running this on the iPhone XR simulator in Xcode 10.2.
The original image is 1280 by 960 points, with scale 1, which would be equivalent to 1280 by 960 pixels. The cropped image is 320 by 320 points, with scale 2, which would be equivalent to 640 by 640 pixels.
Why is the scale set to 2? Can I change that? How can I generate a 320 by 320 pixels image independent on the scale and device?
Well, checking the source code for the af_imageAspectScaled method, I found the following code for generating the actual scaled image:
UIGraphicsBeginImageContextWithOptions(size, af_isOpaque, 0.0)
draw(in: CGRect(origin: origin, size: scaledSize))
let scaledImage = UIGraphicsGetImageFromCurrentImageContext() ?? self
UIGraphicsEndImageContext()
The 0.0 passed as the last parameter of UIGraphicsBeginImageContextWithOptions tells the method to use the main screen's scale factor when defining the image size.
I tried setting this to 1.0 and, when running my test case, af_imageAspectScaled generated an image with the dimensions I wanted.
Here is a table showing all iOS device resolutions. My app was sending appropriately sized images for all devices with a scale factor of 2.0; however, several devices have a scale factor of 3.0, and for those the app wasn't working.
Well, unfortunately it seems that if I want to use af_imageAspectScaled, I have to divide the final size I want by the device's scale when setting the scaled size, like so:
let scale = UIScreen.main.scale
let croppedImage = image.af_imageAspectScaled(
    toFill: CGSize(
        width: 320 / scale,
        height: 320 / scale
    )
)
I've sent a pull request to AlamofireImage proposing the addition of a parameter scale to the functions af_imageAspectScaled(toFill:), af_imageAspectScaled(toFit:) and af_imageScaled(to:). If they accept it, the above code should become:
// this is not valid with Alamofire 4.0.0 yet! waiting for my pull request to
// be accepted
let croppedImage = image.af_imageAspectScaled(
    toFill: CGSize(
        width: 320,
        height: 320
    ),
    scale: 1.0
)
// croppedImage would be a 320px by 320px image, regardless of the device type.
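In the meantime, if you would rather not patch the library, one workaround (a sketch of my own, not AlamofireImage API) is to re-render the scaled image into a context pinned to scale 1.0 with UIGraphicsImageRenderer (iOS 10+), so the bitmap is exactly 320x320 pixels on every device:

import UIKit

// Redraws an image into a 1x context so that 1 point == 1 pixel.
func renderAtOneX(_ image: UIImage, size: CGSize) -> UIImage {
    let format = UIGraphicsImageRendererFormat()
    format.scale = 1.0 // ignore the device's screen scale
    let renderer = UIGraphicsImageRenderer(size: size, format: format)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: size))
    }
}

// Usage:
// let scaled = image.af_imageAspectScaled(toFill: CGSize(width: 320, height: 320))
// let croppedImage = renderAtOneX(scaled, size: CGSize(width: 320, height: 320)) // 320x320 pixels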

Strange CoreImage cropping issue encountered here

I have a strange issue where, after I crop a photo from my photo library, it cannot be displayed by the app. I get this error after I run this code:
self.correctedImageView.image = UIImage(ciImage: correctedImage)
[api] -[CIContext(CIRenderDestination) _startTaskToRender:toDestination:forPrepareRender:error:] The image extent and destination extent do not intersect.
Here is the code I used to crop and display (inputImage is a CIImage):
let imageSize = inputImage.extent.size
let correctedImage = inputImage
    .cropped(to: textObvBox.boundingBox.scaled(to: imageSize))
DispatchQueue.main.async {
    self.correctedImageView.image = UIImage(ciImage: correctedImage)
}
More Info: Debug print the extent of the inputImage and correctedImage
Printing description of self.inputImage: <CIImage: 0x1c42047a0 extent [0 0 3024 4032]>
crop [430 3955 31 32] extent=[430 3955 31 32]
affine [0 -1 1 0 0 4032] extent=[0 0 3024 4032] opaque
affine [1 0 0 -1 0 3024] extent=[0 0 4032 3024] opaque
colormatch "sRGB IEC61966-2.1"_to_workingspace extent=[0 0 4032 3024] opaque
IOSurface 0x1c4204790(501) seed:1 YCC420f 601 alpha_one extent=[0 0 4032 3024] opaque
The funny thing is that when I put a breakpoint in Xcode, I was able to preview the cropped image properly. I'm not sure what this extent thing is for CIImage, but UIImageView doesn't like it when I assign the cropped image to it. Any idea what this extent does?
I ran into the same problem you describe. Due to some weird behavior in UIKit / CoreImage, I needed to convert the CIImage to a CGImage first. I noticed it only happened when I applied some filters to the CIImage, described below.
let image = /* my CIImage */
let goodImage = UIImage(ciImage: image)
uiImageView.image = goodImage // works great!
let image = /* my CIImage after applying 5-10 filters */
let badImage = UIImage(ciImage: image)
uiImageView.image = badImage // empty view!
This is how I solved it.
let ciContext = CIContext()
let cgImage = ciContext.createCGImage(image, from: image.extent)!
uiImageView.image = UIImage(cgImage: cgImage) // works great!!!
As other commenters have stated, beware of creating a CIContext too often; it's an expensive operation.
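One simple way to avoid recreating it (a sketch of my own, not from the answer above) is to keep a single shared instance and reuse it:

import CoreImage

// A single shared CIContext, created once and reused for every render.
enum ImageRendering {
    static let ciContext = CIContext()
}

// Reuse it everywhere, e.g.:
// let cgImage = ImageRendering.ciContext.createCGImage(image, from: image.extent)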
Extent in CIImage can be a bit of a pain, especially when working with cropping and translation filters. Effectively, it allows the new image's position to relate directly to the part of the old image it was taken from. If you crop the center out of an image, you'll find the extent's origin is non-zero, and you'll get an issue like this. I believe the feature is intended for things such as an art application that supports layers.
As noted above, you CAN convert the CIImage into a CGImage and from there to a UIImage, but this is slow and wasteful. It works because a CGImage doesn't preserve the extent, while a UIImage backed by a CIImage does. However, as noted, it requires creating a CIContext, which has a lot of overhead.
There is, fortunately, a better way. Simply create a CIAffineTransformFilter and populate it with an affine transform equal to the negative of the extent's origin, thus:
let transformFilter = CIFilter(name: "CIAffineTransform")!
let translate = CGAffineTransform(translationX: -image.extent.minX, y: -image.extent.minY)
let value = NSValue(cgAffineTransform: translate)
transformFilter.setValue(value, forKey: kCIInputTransformKey)
transformFilter.setValue(image, forKey: kCIInputImageKey)
let newImage = transformFilter.outputImage
Now, newImage should be identical to image but with an extent origin of zero. You can then pass it directly to UIImage(ciImage:). I timed this and found it to be immensely faster than creating a CIContext and making a CGImage. For the sake of interest:
CGImage Method: 0.135 seconds.
CIFilter Method: 0.00002 seconds.
(Running on iPhone XS Max)
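An equivalent shortcut (my own sketch, assuming a Swift version where CIImage.transformed(by:) is available) applies the same translation without building a CIFilter by name:

import UIKit

// Returns a copy of the image whose extent origin is moved to (0, 0).
func zeroingExtentOrigin(of image: CIImage) -> CIImage {
    return image.transformed(by: CGAffineTransform(translationX: -image.extent.minX,
                                                   y: -image.extent.minY))
}

// Usage:
// uiImageView.image = UIImage(ciImage: zeroingExtentOrigin(of: correctedImage))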
On iOS 10 or later, you can use UIGraphicsImageRenderer to draw your image:
let img: CIImage = /* my ciimage */
let renderer = UIGraphicsImageRenderer(size: img.extent.size)
uiImageView.image = renderer.image { (context) in
    UIImage(ciImage: img).draw(in: .init(origin: .zero, size: img.extent.size))
}

Applying CIFilter to UIImage results in resized and repositioned image

After applying a CIFilter to a photo captured with the camera, the image shrinks and repositions itself.
I was thinking that if I was able to get the original image's size and orientation, it would scale accordingly and pin the image view to the corners of the screen. However, nothing changed with this approach, and I'm not aware of a way to properly get the image to scale to the full size of the screen.
func applyBloom() -> UIImage {
    let ciImage = CIImage(image: image) // image is from UIImageView
    let filteredImage = ciImage?.applyingFilter("CIBloom",
        withInputParameters: [kCIInputRadiusKey: 8, kCIInputIntensityKey: 1.00])
    let originalScale = image.scale
    let originalOrientation = image.imageOrientation
    if let image = filteredImage {
        let image = UIImage(ciImage: image, scale: originalScale, orientation: originalOrientation)
        return image
    }
    return self.image
}
Picture description: the captured photo and a screenshot of the resulting image, where the empty spacing is the result of the image shrinking.
Try something like this. Replace your function with:
func applyBloom() -> UIImage {
    let ciInputImage = CIImage(image: image)! // image is from UIImageView
    let ciOutputImage = ciInputImage.applyingFilter("CIBloom",
        withInputParameters: [kCIInputRadiusKey: 8, kCIInputIntensityKey: 1.00])
    let context = CIContext()
    let cgOutputImage = context.createCGImage(ciOutputImage, from: ciInputImage.extent)
    return UIImage(cgImage: cgOutputImage!)
}
I renamed various variables to help explain what's happening.
Obviously, depending on your code, some tweaking to optionals and unwrapping may be needed.
What's happening is this: take the filtered/output CIImage and, using a CIContext, create a CGImage the size of the input CIImage's extent.
Be aware that a CIContext is expensive. If you already have one created, you should probably use it.
Pretty much, a UIImage size is the same as a CIImage extent. (I say pretty much because some generated CIImages can have infinite extents.)
Depending on your specific needs (and your UIImageView), you may want to use the output CIImage extent instead. Usually though, they are the same.
Last, a suggestion. If you are trying to use a CIFilter to show "near real-time" changes to an image (like a photo editor), consider the major performance improvements you'll get by using CIImages and a GLKView over UIImages and a UIImageView. The former uses the device's GPU instead of the CPU.
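For reference, a rough sketch of that GPU-backed approach (my own, not from the answer above), rendering a CIImage into a GLKView through a CIContext created from an EAGLContext:

import UIKit
import GLKit
import CoreImage

let eaglContext = EAGLContext(api: .openGLES2)!
let glkView = GLKView(frame: UIScreen.main.bounds, context: eaglContext)
let ciContext = CIContext(eaglContext: eaglContext)

func draw(_ image: CIImage) {
    glkView.bindDrawable()
    // Drawable dimensions are in pixels, not points.
    let bounds = CGRect(x: 0, y: 0,
                        width: glkView.drawableWidth,
                        height: glkView.drawableHeight)
    ciContext.draw(image, in: bounds, from: image.extent)
    glkView.display()
}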
This can also happen if a CIFilter outputs an image with dimensions different from those of the input image (e.g. with CIPixellate).
In which case, simply tell the CIContext to render the image in a smaller rectangle:
let cgOutputImage = context.createCGImage(ciOutputImage, from: ciInputImage.extent.insetBy(dx: 20, dy: 20))

Confused about UIImage's init(data:scale:) method in Swift 3.0

Here is my purpose: I have an image with a width of 750 and I want to scale it down to a width of 128.
Then I found an init method of UIImage called
init(data:scale:)
Here is my code:
func scaleImage(image: UIImage, ToSpecificWidth targetWidth: Int) -> UIImage {
    let scale = Double(targetWidth) / Double(image.size.width)
    let scaledImage = UIImage(data: UIImagePNGRepresentation(image)! as Data, scale: CGFloat(scale))
    return scaledImage!
}
print("originImageBeforeWidth: \(portrait?.size.width)") // output: 750
let newImage = Tools.scaleImage(image: portrait!, ToSpecificWidth: 128) // the scale is about 0.17
print("newImageWidth: \(newImage.size.width)") // output: 4394.53125
Apparently the new width is far from what I intended.
I'm looking for 750 * 0.17 = 128
But I get 750 / 0.17 = 4394
Then I changed my scaleImage func.
Here is the updated code
func scaleImage(image: UIImage, ToSpecificWidth targetWidth: Int) -> UIImage {
    var scale = Double(targetWidth) / Double(image.size.width)
    scale = 1 / scale // newly added
    let scaledImage = UIImage(data: UIImagePNGRepresentation(image)! as Data, scale: CGFloat(scale))
    return scaledImage!
}
print("originImageBeforeWidth: \(portrait?.size.width)") // output: 750
let newImage = Tools.scaleImage(image: portrait!, ToSpecificWidth: 128) // the scale is about 5.88
print("newImageWidth: \(newImage.size.width)") // output: 128
This is exactly what I want, but the line scale = 1/scale doesn't make any sense to me.
What is going on here?
The init method you are trying to use is not for resizing an image, and the scale parameter is not a resizing scale; it's the 1x, 2x, 3x display scale. Essentially, the only valid values for scale are currently 1.0, 2.0, or 3.0.
While setting the scale to the inverse of what you expected gives you a size property returning your desired result, it is not at all what you should be doing.
There are proper ways to resize an image such as How to Resize image in Swift? as well as others.
According to Apple's documentation:
UIImage.scale
The scale factor to assume when interpreting the image data. Applying a scale factor of 1.0 results in an image whose size matches the pixel-based dimensions of the image. Applying a different scale factor changes the size of the image as reported by the size property.
UIImage.size
This value reflects the logical size of the image and takes the image’s current orientation into account. Multiply the size values by the value in the scale property to get the pixel dimensions of the image.
so,
real pixel size = size * scale
That's why you need to set 1/scale.
By the way, scale only affects the size value reported by the UIImage; it changes the size used for display on screen, not the pixel size.
If you want to resize the actual pixels, you can draw the image into an appropriately scaled context and use
UIGraphicsGetImageFromCurrentImageContext()
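For example, a minimal resize sketch of my own along those lines (not the asker's original function): redraw the image into a context whose width is the target width and whose scale is 1.0, producing a bitmap that is truly 128 pixels wide.

import UIKit

func scaleImage(image: UIImage, toSpecificWidth targetWidth: Int) -> UIImage {
    let ratio = CGFloat(targetWidth) / image.size.width
    let targetSize = CGSize(width: CGFloat(targetWidth), height: image.size.height * ratio)
    // Scale factor 1.0 makes points equal pixels in the resulting image.
    UIGraphicsBeginImageContextWithOptions(targetSize, false, 1.0)
    image.draw(in: CGRect(origin: .zero, size: targetSize))
    let resized = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return resized ?? image
}

// scaleImage(image: portrait!, toSpecificWidth: 128).size.width // 128.0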

Resolution Loss when generating an Image from UIImageView

Okay, sorry if the title is a little confusing. Basically, I am trying to get the image/subviews of the image view and combine them into a single exportable UIImage.
Here is my current code; however, it has a large resolution loss.
func generateImage() -> UIImage {
    UIGraphicsBeginImageContext(environmentImageView.frame.size)
    let context: CGContextRef = UIGraphicsGetCurrentContext()
    environmentImageView.layer.renderInContext(context)
    let img: UIImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return img
}
You have to set the scale of the context to be retina.
UIGraphicsBeginImageContextWithOptions(environmentImageView.frame.size, false, 0)
Passing 0 means to use the scale of the screen, which will work for non-Retina devices as well.
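Applied to the function above (a sketch in modern Swift syntax; I'm assuming environmentImageView is the image view being captured), it would look like this:

import UIKit

func generateImage() -> UIImage? {
    // 0 as the scale uses the screen's scale factor, so Retina devices get a full-resolution bitmap.
    UIGraphicsBeginImageContextWithOptions(environmentImageView.frame.size, false, 0)
    guard let context = UIGraphicsGetCurrentContext() else { return nil }
    environmentImageView.layer.render(in: context)
    let img = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return img
}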
