Does UIImage(contentsOfFile: <String>) take into consideration screen scale? - ios

I have an image on disk and load it like so:
guard let image = UIImage(contentsOfFile: url.path!) else { return }
self.testImageView.image = image
When loading an image like this from file, will the image be drawn on screen with the correct scale factor?
I am asking because when I do this:
guard let imageData = NSData(contentsOfFile: url.path!) else { return }
guard let image = UIImage(data: imageData, scale: UIScreen.mainScreen().scale) else { return }
self.testImageView.image = image
the image looks way sharper.

As stated in the Apple documentation on Supporting High-Resolution Screens in Views, which can be found in the Drawing and Printing Guide for iOS:
On devices with high-resolution screens, the imageNamed:, imageWithContentsOfFile:, and initWithContentsOfFile: methods automatically looks for a version of the requested image with the @2x modifier in its name. If it finds one, it loads that image instead. If you do not provide a high-resolution version of a given image, the image object still loads a standard-resolution image (if one exists) and scales it during drawing.
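In practical terms, an image loaded from an arbitrary file path comes in at scale 1 unless its filename ends in a scale modifier, which is why the data-plus-scale variant above looks sharper. A minimal sketch contrasting the two, in modern Swift and assuming a file URL named url:
guard let image = UIImage(contentsOfFile: url.path) else { return }
print(image.scale) // 1.0 unless the filename carries a modifier such as @2x

guard let data = FileManager.default.contents(atPath: url.path),
      let scaledImage = UIImage(data: data, scale: UIScreen.main.scale) else { return }
print(scaledImage.scale) // matches the screen, so points = pixels / scale
Alternatively, saving the file with an @2x suffix in its name (e.g. photo@2x.png) lets UIImage(contentsOfFile:) infer the scale on its own, per the documentation quoted above.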

Related

Swift UIImage .jpegData() and .pngData() changes image size

I am using Swift's Vision framework for deep learning and want to upload the input image to the backend using a REST API, for which I am converting my UIImage to MultipartFormData using the jpegData() and pngData() functions that Swift natively offers.
I use session.sessionPreset = .vga640x480 to specify the image size in my app for processing.
I was seeing a different image size on the backend, which I was able to confirm in the app, because a UIImage recreated from that imageData has a different size.
This is how I convert the image to multipart data:
let multipartData = MultipartFormData()
if let imageData = self.image?.jpegData(compressionQuality: 1.0) {
    multipartData.append(imageData, withName: "image", fileName: "image.jpeg", mimeType: "image/jpeg")
}
This is what I see in the Xcode debugger (screenshot omitted):
The following looks intuitive, but manifests the behavior you describe, whereby one ends up with a Data representation of the image with an incorrect scale and pixel size:
let ciImage = CIImage(cvImageBuffer: pixelBuffer) // 640×480
let image = UIImage(ciImage: ciImage) // says it is 640×480 with scale of 1
guard let data = image.pngData() else { ... } // but if you extract `Data` and then recreate image from that, the size will be off by a multiple of your device’s scale
However, if you create it via a CGImage, you will get the right result:
let ciImage = CIImage(cvImageBuffer: pixelBuffer)
let ciContext = CIContext()
guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { return }
let image = UIImage(cgImage: cgImage)
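To see the discrepancy the first snippet produces, a quick hypothetical check is to round-trip the image through its PNG data and compare:
if let data = image.pngData(), let roundTripped = UIImage(data: data) {
    print(image.size, image.scale)               // e.g. (640.0, 480.0) at scale 1
    print(roundTripped.size, roundTripped.scale) // pixel size inflated by the device scale
}
With the CGImage-based variant, the two sizes come out the same.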
You asked:
If my image is 640×480 points with scale 2, would my deep learning model take the same time to process it as a 1280×960-point image with scale 1?
There is no difference, as far as the model goes, between 640×480pt @2× and 1280×960pt @1×.
The question is whether 640×480pt @2× is better than 640×480pt @1×: the model will undoubtedly generate better results, though possibly more slowly, with higher-resolution images (though at 2× the asset is roughly four times larger and slower to upload; on a 3× device it would be roughly nine times larger).
But if you look at the larger asset generated by the direct CIImage » UIImage process, you can see that it did not really capture a 1280×960 snapshot, but rather captured 640×480 and upscaled it (with some smoothing), so you do not really have a more detailed asset to work with, and it is unlikely to generate better results. You will pay the penalty of the larger asset, but likely without any benefit.
If you need better results with larger images, I would change the preset to a higher resolution, but still avoid the scale-based adjustment by using the CIContext/CGImage-based snippet shared above.
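For example, a sketch of that preset change, assuming the AVCaptureSession referenced in the question is named session:
if session.canSetSessionPreset(.hd1280x720) {
    session.sessionPreset = .hd1280x720 // more real pixels, still scale 1
}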

CIRadialGradient reduces image size

After applying CIRadialGradient to my image it gets reduced in width by about 20%.
guard let image = bgImage.image, let cgimg = image.cgImage else {
    print("imageView doesn't have an image!")
    return
}
let coreImage = CIImage(cgImage: cgimg)
guard let radialMask = CIFilter(name: "CIRadialGradient") else {
    return
}
guard let maskedVariableBlur = CIFilter(name: "CIMaskedVariableBlur") else {
    print("CIMaskedVariableBlur does not exist")
    return
}
maskedVariableBlur.setValue(coreImage, forKey: kCIInputImageKey)
maskedVariableBlur.setValue(radialMask.outputImage, forKey: "inputMask")
guard let selectivelyFocusedCIImage = maskedVariableBlur.outputImage else {
    print("Setting maskedVariableBlur failed")
    return
}
bgImage.image = UIImage(ciImage: selectivelyFocusedCIImage)
To clarify, bgImage is a UIImageView.
Why does this happen and how do I fix it?
Without RadialMask: (screenshot omitted)
With RadialMask: (screenshot omitted)
On my physical iPhone, the smaller image is additionally aligned to the left.
I tend to explicitly state how big the image is by using a CIContext and creating a specifically sized CGImage instead of simply using UIImage(ciImage:). Try this, assuming your input CIImage is called coreImage:
let ciCtx = CIContext()
let cgImage = ciCtx.createCGImage(selectivelyFocusedCIImage, from: coreImage.extent)
let uiImage = UIImage(cgImage: cgImage!)
A few notes....
(1) I pulled this code out from an app I'm wrapping up. This is untested code (including the forced-unwrap), but the concept of what I'm doing is solid.
(2) You don't explain much of what you are trying to do, but when I see a variable named selectivelyFocusedCIImage I get concerned that you may be trying to use Core Image in a more interactive way than "just" creating one image. If you want near-real-time performance, render the CIImage in either a GLKView (deprecated as of iOS 12) or an MTKView instead of a UIImageView. A UIImageView only uses the CPU, whereas the other two use the GPU.
(3) Finally, a word of warning on CIContexts: they are expensive to create! Usually you can code things such that there is only one context shared by everything in your app.
Look up the documentation; it's a mask that is being applied to the image:
Docs: CIRadialGradient
The different sizes are caused by the kernel size of the blur filter:
The blur filter needs to sample a region around each pixel. Since there are no pixels beyond the image bounds, Core Image reduces the extent of the result image by half the kernel size (the blur radius) to signal that, for those pixels, there is not enough information for a proper blur.
However, you can tell Core Image to treat the border pixels as extending infinitely in all directions so that the blur filter gets enough information even at the edges of the image. Afterwards, you can crop the result back to the original dimensions.
In your code, just change the following two lines:
maskedVariableBlur.setValue(coreImage.clampedToExtent(), forKey: kCIInputImageKey)
bgImage.image = UIImage(ciImage: selectivelyFocusedCIImage.cropped(to: coreImage.extent))
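Putting the two answers together, here is an untested sketch of the full pipeline, reusing the variables from the question and a single shared CIContext:
let ciCtx = CIContext() // create once and reuse; contexts are expensive
maskedVariableBlur.setValue(coreImage.clampedToExtent(), forKey: kCIInputImageKey)
maskedVariableBlur.setValue(radialMask.outputImage, forKey: "inputMask")
if let blurred = maskedVariableBlur.outputImage?.cropped(to: coreImage.extent),
   let cgImage = ciCtx.createCGImage(blurred, from: blurred.extent) {
    bgImage.image = UIImage(cgImage: cgImage) // same size as the original image
}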

Applying CIFilter to UIImage results in resized and repositioned image

After applying a CIFilter to a photo captured with the camera, the image shrinks and repositions itself.
I was thinking that if I was able to get the original image's size and orientation, it would scale accordingly and the image view would stay pinned to the corners of the screen. However, nothing changes with this approach, and I am not aware of a way to properly get the image to scale to the full size of the screen.
func applyBloom() -> UIImage {
    let ciImage = CIImage(image: image) // image is from UIImageView
    let filteredImage = ciImage?.applyingFilter("CIBloom",
                                                withInputParameters: [kCIInputRadiusKey: 8,
                                                                      kCIInputIntensityKey: 1.00])
    let originalScale = image.scale
    let originalOrientation = image.imageOrientation
    if let image = filteredImage {
        let image = UIImage(ciImage: image, scale: originalScale, orientation: originalOrientation)
        return image
    }
    return self.image
}
Picture description (screenshots omitted): the captured photo and the resulting image, where the empty spacing is caused by the image shrinking.
Try something like this, replacing your applyBloom() with:
func applyBloom() -> UIImage {
    let ciInputImage = CIImage(image: image)! // image is from UIImageView
    let ciOutputImage = ciInputImage.applyingFilter("CIBloom",
                                                    withInputParameters: [kCIInputRadiusKey: 8, kCIInputIntensityKey: 1.00])
    let context = CIContext()
    let cgOutputImage = context.createCGImage(ciOutputImage, from: ciInputImage.extent)
    return UIImage(cgImage: cgOutputImage!)
}
I renamed various variables to help explain what's happening.
Obviously, depending on your code, some tweaking to optionals and unwrapping may be needed.
What's happening is this - take the filtered/output CIImage, and using a CIContext, write a CGImage the size of the input CIImage.
Be aware that a CIContext is expensive. If you already have one created, you should probably use it.
Pretty much, a UIImage size is the same as a CIImage extent. (I say pretty much because some generated CIImages can have infinite extents.)
Depending on your specific needs (and your UIImageView), you may want to use the output CIImage extent instead. Usually though, they are the same.
Last, a suggestion. If you are trying to use a CIFilter to show near-real-time changes to an image (like a photo editor), consider the major performance improvements you'll get using CIImages and a GLKView over UIImages and a UIImageView. The former uses the device's GPU instead of the CPU.
This could also happen if a CIFilter outputs an image with dimensions different than the input image (e.g. with CIPixellate)
In which case, simply tell the CIContext to render the image in a smaller rectangle:
let cgOutputImage = context.createCGImage(ciOutputImage, from: ciInputImage.extent.insetBy(dx: 20, dy: 20))
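For instance, a hypothetical check of the extent mismatch with CIPixellate, reusing ciInputImage and context from above:
let pixellated = ciInputImage.applyingFilter("CIPixellate",
                                             withInputParameters: [kCIInputScaleKey: 20])
print(ciInputImage.extent, pixellated.extent) // the output extent is padded outward
let cgCropped = context.createCGImage(pixellated, from: ciInputImage.extent) // crop back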

If a filter is applied to a PNG where height > width, it rotates the image 90 degrees. How can I efficiently prevent this?

I'm making a simple filter app. I've found that if you load an image from the camera roll that is a PNG (PNGs have no orientation data flag) and its height is greater than its width, then upon applying certain distortion filters the image rotates and presents itself as if it were a landscape image.
I found the technique below online somewhere in the many tabs I had open, and it seems to do exactly what I want: it uses the original scale and orientation of the image from when it was first loaded.
let newImage = UIImage(CIImage:(output), scale: 1.0, orientation: self.origImage.imageOrientation)
but this is the warning I get when I try to use it:
Ambiguous use of 'init(CIImage:scale:orientation:)'
Here's the entire thing I'm trying to get working:
// global variables
var image: UIImage!
var origImage: UIImage!

func setFilter(action: UIAlertAction) {
    origImage = image
    // make sure we have a valid image before continuing!
    guard let image = self.imageView.image?.cgImage else { return }
    let openGLContext = EAGLContext(api: .openGLES3)
    let context = CIContext(eaglContext: openGLContext!)
    let ciImage = CIImage(cgImage: image)
    let currentFilter = CIFilter(name: "CIBumpDistortion")
    currentFilter?.setValue(ciImage, forKey: kCIInputImageKey)
    if let output = currentFilter?.value(forKey: kCIOutputImageKey) as? CIImage {
        // the line below is the one giving me errors, which I thought would work
        let newImage = UIImage(CIImage: (output), scale: 1.0, orientation: self.image.imageOrientation)
        self.imageView.image = UIImage(cgImage: context.createCGImage(newImage, from: output.extent)!)
    }
}
The filters all work; unfortunately, they turn the images described above by 90 degrees, for the reasons I suspect.
I've tried some other methods, like using an extension that checks the orientation of UIImages, converting the CIImage to a UIImage, using the extension, and then trying to convert it back to a CIImage, or just loading the UIImage into the imageView for output. I ran into snag after snag with that process, and it started to seem really convoluted just to get certain images into their default orientation.
Any advice would be greatly appreciated!
EDIT: here's where I got the method I was trying: When applying a filter to a UIImage the result is upside down
I found the answer. My biggest issue was the "Ambiguous use of 'init(CIImage:scale:orientation:)'" error.
It turned out that Xcode was auto-completing the code as 'CIImage:scale:orientation' when it should have been 'ciImage:scale:orientation'. The very vague error left a new dev like me scratching my head for 3 days. (This was true for the CGImage and UIImage inits as well, but my original error was with CIImage, so I used that to explain.)
With that knowledge I was able to formulate the code below for my new output:
if let output = currentFilter?.value(forKey: kCIOutputImageKey) as? CIImage {
    let outputImage = UIImage(cgImage: context.createCGImage(output, from: output.extent)!)
    let imageTurned = UIImage(cgImage: outputImage.cgImage!, scale: CGFloat(1.0), orientation: origImage.imageOrientation)
    centerScrollViewContents()
    self.imageView.image = imageTurned
}
This code replaces the if let output in the OP.

Write UIImage to PNG; read PNG into UIImage; resulting image has twice the resolution

Following these steps I'm getting an unexpected result:
1) Write UIImage to PNG
2) Read PNG into UIImage
Result: the sizes of the UIImages (before and after PNG) are different. The UIImage created by reading from the PNG has twice the resolution as the original UIImage used to create the PNG.
Swift pseudocode follows:
var initialImage: UIImage
// Obtain UIImage from UIImagePickerController, Images.xcassets, ...
// As an example, consider a UIImage with size = (420.0, 280.0).
println("Image size = \(initialImage.size)") // Indicates "Image size = (420.0, 280.0)"
// Write image to file
UIImagePNGRepresentation(initialImage).writeToFile(imagePath, atomically: true)
// Retrieve image from file
let newImage = UIImage(contentsOfFile: imagePath)!
println("Image size = \(newImage.size)") // Indicates "Image size = (840.0, 560.0)"!
All is well otherwise. Displaying the newly read image works great.
I'm running this connected to an iPhone 6. Any thoughts would be greatly appreciated.
gary
It's converting it for the Retina display. Each point is represented by 2 pixels on the screen, so the saved PNG is double the resolution: the file stores the 840×560 pixel data, but UIImage(contentsOfFile:) assumes a scale of 1 unless the filename carries an @2x modifier, so the reloaded image reports its pixel size as its point size. This is a good article to reference: http://www.paintcodeapp.com/news/iphone-6-screens-demystified
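If you want the reloaded image to report the original point size, one option (a sketch, reusing imagePath and initialImage from the question) is to read the raw data and supply the scale yourself, since a PNG stores pixels only and the UIImage scale is not saved with it:
if let data = NSData(contentsOfFile: imagePath),
   let restored = UIImage(data: data, scale: initialImage.scale) {
    println("Image size = \(restored.size)") // Indicates "Image size = (420.0, 280.0)" again
}
Alternatively, write the file to a path whose name ends in @2x so that UIImage(contentsOfFile:) infers the scale automatically.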
