Write UIImage to PNG; read PNG into UIImage; resulting image has twice the resolution - ios

Following these steps I'm getting an unexpected result:
1) Write UIImage to PNG
2) Read PNG into UIImage
Result: the sizes of the UIImages (before and after PNG) are different. The UIImage created by reading from the PNG has twice the resolution as the original UIImage used to create the PNG.
Swift pseudocode follows:
var initialImage: UIImage
// Obtain UIImage from UIImagePickerController, Images.xcassets, ...
// As an example, consider a UIImage with size = (420.0, 280.0).
println("Image size = \(initialImage.size)") // Indicates "Image size = (420.0, 280.0)"
// Write image to file
UIImagePNGRepresentation(initialImage).writeToFile(imagePath, atomically: true)
// Retrieve image from file
let newImage = UIImage(contentsOfFile: imagePath)!
println("Image size = \(newImage.size)") // Indicates "Image size = (840.0, 560.0)"!
All is well otherwise. Displaying the newly read image works great.
I'm running this connected to an iPhone 6. Any thoughts would be greatly appreciated.
gary

It's the Retina scale factor. Your original UIImage has a scale of 2.0, so each point is backed by 2 pixels; UIImagePNGRepresentation writes those pixels (840 × 560) to disk, and a PNG file carries no scale information. When you read the file back with UIImage(contentsOfFile:) and the file name has no @2x suffix, the new UIImage gets a scale of 1.0 and reports its pixel dimensions as its point size. This is a good article to reference: http://www.paintcodeapp.com/news/iphone-6-screens-demystified
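If you need the round-tripped image to come back at the original point size, one option is to load the raw data and re-attach the scale yourself. A minimal sketch in the same style as the question, reusing imagePath and initialImage from above:
// Read the PNG bytes back and supply the original scale explicitly,
// so the point size matches the image we started with.
let imageData = NSData(contentsOfFile: imagePath)!
let restoredImage = UIImage(data: imageData, scale: initialImage.scale)!
println("Image size = \(restoredImage.size)") // Indicates "Image size = (420.0, 280.0)" again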

Related

Image size changes when converting it from Data in Swift 3

I want to save an image in a database, so I convert it to Data. However, during these steps the width and height of the image change: it increases in size.
// Original Image Size
print("Original Image Size : \(capturedImage.size)") // Displays (320.0, 427.0)
// Convert to Data
var imageData: Data?
imageData = UIImagePNGRepresentation(capturedImage)
// Store imageData into Db.
// Convert it back
m_CarImgVw.image = UIImage(data: damageImage!.imageData!, scale: 1.0)
print("m_CarImgVw Image Size : \(m_CarImgVw.image.size)") // Displays (640.0, 854.0)
I do not want the imagesize to increase!
If it's originally an image from your assets, it's probably @2x, which means the size in pixels (its real size) is double the size in points (the displayed size). So the image size isn't actually increasing: it was 640x854 pixels both before and after the conversion. It's just that before, the OS automatically scaled it because it was named @2x.
To use the original image scale you can replace 1.0 with capturedImage.scale.
Your problem is in this line:
m_CarImgVw.image = UIImage(data: damageImage!.imageData!, scale: 1.0)
Can you see it?
Hint: It's in scale: 1.0.
It looks like your original image was Retina (or @2x), so it had scale 2.0.
So you should either pass your original image's scale (capturedImage.scale) there, or, if you're presenting the image on screen, use UIScreen's scale.
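A sketch of both options (imageData and capturedImage are the names from the question; adjust to whatever is actually in scope):
// Option 1: keep the scale the image was originally captured/loaded with.
m_CarImgVw.image = UIImage(data: imageData!, scale: capturedImage.scale)
// Option 2: if the image is only ever drawn on this device's screen,
// use the screen's scale instead.
m_CarImgVw.image = UIImage(data: imageData!, scale: UIScreen.main.scale)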

Applying CIFilter to UIImage results in resized and repositioned image

After applying a CIFilter to a photo captured with the camera, the image shrinks and repositions itself.
I was thinking that if I could get the original image's size and orientation, it would scale accordingly and pin the image view to the corners of the screen. However, nothing changes with this approach, and I'm not aware of a way to properly get the image to scale to the full size of the screen.
func applyBloom() -> UIImage {
    let ciImage = CIImage(image: image) // image is from UIImageView
    let filteredImage = ciImage?.applyingFilter("CIBloom",
                                                withInputParameters: [kCIInputRadiusKey: 8,
                                                                      kCIInputIntensityKey: 1.00])
    let originalScale = image.scale
    let originalOrientation = image.imageOrientation
    if let image = filteredImage {
        let image = UIImage(ciImage: image, scale: originalScale, orientation: originalOrientation)
        return image
    }
    return self.image
}
Picture description:
The captured photo, and a screenshot of the rendered image where the empty spacing results from the image shrinking.
Try something like this. Replace your function with:
func applyBloom() -> UIImage {
    let ciInputImage = CIImage(image: image)! // image is from the UIImageView
    let ciOutputImage = ciInputImage.applyingFilter("CIBloom",
                                                    withInputParameters: [kCIInputRadiusKey: 8, kCIInputIntensityKey: 1.00])
    let context = CIContext()
    let cgOutputImage = context.createCGImage(ciOutputImage, from: ciInputImage.extent)
    return UIImage(cgImage: cgOutputImage!)
}
I renamed various variables to help explain what's happening.
Obviously, depending on your code, some tweaking to optionals and unwrapping may be needed.
What's happening is this: take the filtered/output CIImage and, using a CIContext, create a CGImage the size of the input CIImage's extent.
Be aware that a CIContext is expensive to create. If you already have one, you should probably reuse it.
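If you render repeatedly, a small sketch of that reuse (FilterRenderer is a hypothetical wrapper, not anything from the question) could look like this:
// Create the CIContext once and reuse it for every render, rather than
// constructing a new one inside each filter call.
final class FilterRenderer {
    static let sharedContext = CIContext()

    func render(_ ciImage: CIImage) -> UIImage? {
        guard let cgImage = FilterRenderer.sharedContext.createCGImage(ciImage, from: ciImage.extent) else { return nil }
        return UIImage(cgImage: cgImage)
    }
}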
Pretty much, a UIImage size is the same as a CIImage extent. (I say pretty much because some generated CIImages can have infinite extents.)
Depending on your specific needs (and your UIImageView), you may want to use the output CIImage extent instead. Usually though, they are the same.
Last, a suggestion. If you are trying to use a CIFilter to show "near real-time" changes to an image (like a photo editor), consider the major performance improvements you'll get by using CIImages and a GLKView over UIImages and a UIImageView. The former uses a device's GPU instead of the CPU.
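A rough sketch of that GPU-backed approach (Swift 3; all names here are hypothetical and the view/controller wiring is reduced to the essentials):
import GLKit
import CoreImage
import UIKit

class FilterPreviewController: UIViewController, GLKViewDelegate {
    // One EAGL context shared by the view and the CIContext, created once.
    let eaglContext = EAGLContext(api: .openGLES2)!
    lazy var ciContext: CIContext = CIContext(eaglContext: self.eaglContext)
    var previewImage: CIImage?   // update this when filter parameters change, then call glkView.display()
    var glkView: GLKView!

    override func viewDidLoad() {
        super.viewDidLoad()
        glkView = GLKView(frame: view.bounds, context: eaglContext)
        glkView.delegate = self
        view.addSubview(glkView)
    }

    func glkView(_ view: GLKView, drawIn rect: CGRect) {
        guard let image = previewImage else { return }
        // drawableWidth/Height are in pixels, which is what CIContext.draw expects.
        let destination = CGRect(x: 0, y: 0, width: view.drawableWidth, height: view.drawableHeight)
        ciContext.draw(image, in: destination, from: image.extent)
    }
}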
This could also happen if a CIFilter outputs an image with dimensions different from those of the input image (e.g. with CIPixellate).
In which case, simply tell the CIContext to render the image in a smaller rectangle:
let cgOutputImage = context.createCGImage(ciOutputImage, from: ciInputImage.extent.insetBy(dx: 20, dy: 20))

Why is my CGImage 3x the size of the UIImage

I have a function that takes a UIImage and a colour and uses it to return a UIImage in just that colour.
As part of the function, the UIImage is converted to a CGImage, but for some reason the CGImage is 3 times the size of the UIImage, which means that the final result is 3 times the size it should be.
Here is the function
func coloredImageNamed(name: String, color: UIColor) -> UIImage {
    let startImage: UIImage = UIImage(named: name)!
    print("UIImage W: \(startImage.size.width) H: \(startImage.size.height)")
    let maskImage: CGImage = startImage.CGImage!
    print("CGImage W: \(CGImageGetWidth(maskImage)) H: \(CGImageGetHeight(maskImage))")
    // Make the image that the mask is applied to (just a rectangle of the colour we want to use)
    let colorImageSize: CGSize = CGSizeMake(startImage.size.width, startImage.size.height)
    let colorFillImage: CGImage = getImageWithColor(color, size: colorImageSize).CGImage!
    // Turn the mask image into a mask
    let mask: CGImage = CGImageMaskCreate(CGImageGetWidth(maskImage),
                                          CGImageGetHeight(maskImage),
                                          CGImageGetBitsPerComponent(maskImage),
                                          CGImageGetBitsPerPixel(maskImage),
                                          CGImageGetBytesPerRow(maskImage),
                                          CGImageGetDataProvider(maskImage), nil, false)!
    // Create the new image and convert back to UIImage, then return
    let masked: CGImage! = CGImageCreateWithMask(colorFillImage, mask)
    let returnImage: UIImage! = UIImage(CGImage: masked)
    return returnImage
}
You can see in the first few lines that I have used print() to log the size of both the UIImage and the CGImage to the console. The CGImage is always 3 times the size of the UIImage... I suspect it has something to do with @2x, @3x etc.?
So the question is: why is this happening, and what can I do to get an image out that is the same size as the one I start with?
It's because UIImage has a scale property, which mediates between pixels and points. So, for example, a UIImage created from a 180x180 pixel image, but with a scale of 3, is automatically treated as having a size of 60x60 points. It will report its size as 60x60, and it will also look good on a 3x-resolution screen, where 3 pixels correspond to 1 point. And, as you rightly guess, the @3x suffix, or the corresponding location in the asset catalog, tells the system to give the UIImage a scale of 3 as it forms it.
But a CGImage does not have such a property; it's just a bitmap, the actual pixels of the image. So a CGImage formed from a UIImage that was created from a 180x180 pixel image is 180x180 pixels, and a UIImage built from it without supplying a scale reports a size of 180x180 points.
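One way to get the original point size back (a hedged sketch reusing startImage and masked from the function above) is to pass the source image's scale when converting the CGImage back to a UIImage:
// Re-attach the original scale and orientation so the resulting UIImage
// reports the same point size as startImage.
let returnImage = UIImage(CGImage: masked, scale: startImage.scale, orientation: startImage.imageOrientation)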

Does UIImage(contentsOfFile: <String>) take into consideration screen scale?

I have an image on disk and load it like so:
guard let image = UIImage(contentsOfFile: url.path!) else { return }
self.testImageView.image = image
When loading an image like this from file, will the image be drawn on screen with the correct scale factor?
I am asking because when I do this:
guard let imageData = NSData(contentsOfFile: url.path!) else { return }
guard let image = UIImage(data: imageData, scale: UIScreen.mainScreen().scale) else { return }
self.testImageView.image = image
the image looks way sharper.
As stated in the Apple documentation on Supporting High-Resolution Screens in Views, which can be found in the Drawing and Printing Guide for iOS:
On devices with high-resolution screens, the imageNamed:, imageWithContentsOfFile:, and initWithContentsOfFile: methods automatically look for a version of the requested image with the @2x modifier in its name. If it finds one, it loads that image instead. If you do not provide a high-resolution version of a given image, the image object still loads a standard-resolution image (if one exists) and scales it during drawing.
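In other words, UIImage(contentsOfFile:) only applies a scale when the file name itself carries an @2x/@3x suffix; otherwise the image comes back with scale 1.0 and is stretched when drawn, which is why passing the screen scale explicitly looks sharper. A rough illustration in the same Swift 2 style as the question (photo.png and photo@2x.png are hypothetical files in an assumed documentsPath):
// On a Retina device, the suffix in the file name determines the scale.
let plainImage = UIImage(contentsOfFile: documentsPath + "/photo.png")     // scale == 1.0
let retinaImage = UIImage(contentsOfFile: documentsPath + "/photo@2x.png") // scale == 2.0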

Reducing the quality of a UIImage (compressing it)

Hi, I am trying to compress a UIImage. My code is like so:
var data = UIImageJPEGRepresentation(image, 1.0)
print("Old image size is \(data?.length) bytes")
data = UIImageJPEGRepresentation(image, 0.7)
print("New image size is \(data?.length) bytes")
let optimizedImage = UIImage(data: data!, scale: 1.0)
The print out is:
Old image size is Optional(5951798) bytes
New image size is Optional(1416792) bytes
Later on in the app I upload the UIImage to a webserver (it's converted to NSData first). When I check the size it's the original 5951798 bytes. What could I be doing wrong here?
A UIImage provides a bitmap (uncompressed) version of the image, which is what is used to render it to the screen. The JPEG data of an image can be compressed by reducing the quality (in effect, the colour variance) of the image, but that compression only exists in the JPEG data. Once the compressed JPEG is unpacked back into a UIImage, it again requires the full bitmap size (which depends on the colour format and the image dimensions).
Basically, keep the best quality image you can for display and compress just before upload.
Send that data (data = UIImageJPEGRepresentation(image, 0.7)) to the server, instead of converting optimizedImage back to data and sending that.
You already compressed the image to data; re-encoding the UIImage just repeats the process and resets it to the original size.
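A minimal sketch of that advice (uploadToServer is a hypothetical stand-in for whatever networking call the app uses): compress once, right before the upload, and send that NSData directly instead of round-tripping through a UIImage.
// Compress just before upload; do not re-encode the decoded UIImage.
if let uploadData = UIImageJPEGRepresentation(image, 0.7) {
    uploadToServer(uploadData)
}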
