Resolution Loss when generating an Image from UIImageView - iOS

Okay, sorry if the title is a little confusing. Basically I am trying to get the image and subviews of the image view and combine them into a single exportable UIImage.
Here is my current code, however it has a large resolution loss.
func generateImage() -> UIImage {
    UIGraphicsBeginImageContext(environmentImageView.frame.size)
    let context: CGContextRef = UIGraphicsGetCurrentContext()!
    environmentImageView.layer.renderInContext(context)
    let img: UIImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return img
}

You have to set the scale of the context to be retina.
UIGraphicsBeginImageContextWithOptions(environmentImageView.frame.size, false, 0)
Passing 0 as the scale means "use the scale of the screen", which works for non-retina devices as well.
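For reference, here is a minimal sketch of the corrected helper in current Swift syntax, assuming the same environmentImageView property as in the question:
func generateImage() -> UIImage? {
    // Scale 0 makes the context match the screen's scale, so the rendered
    // image keeps its full (retina) resolution.
    UIGraphicsBeginImageContextWithOptions(environmentImageView.frame.size, false, 0)
    defer { UIGraphicsEndImageContext() }
    guard let context = UIGraphicsGetCurrentContext() else { return nil }
    environmentImageView.layer.render(in: context)
    return UIGraphicsGetImageFromCurrentImageContext()
}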

Related

Get Visible portion Image from UIImageView in Scrollview

I have a UIImageView in a UIScrollView which can be zoomed in and out. Now, after the user has zoomed in on the specific content, I want to crop that part of the image shown in the scroll view and get it in the form of a UIImage.
For that I am using
extension UIScrollView {
    var snapshotVisibleArea: UIImage? {
        UIGraphicsBeginImageContext(bounds.size)
        UIGraphicsGetCurrentContext()?.translateBy(x: -contentOffset.x, y: -contentOffset.y)
        layer.render(in: UIGraphicsGetCurrentContext()!)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}
But when I implement this, the quality of the image gets extremely degraded. Even if I use a 4K image, the final product looks like 360p.
This logic is just basic capturing of the screen content.
I know there can be a better way but I am not able to find a solution.
Any help is highly appreciated.
You can try this:
let context:CGContext = UIGraphicsGetCurrentContext()!
context.interpolationQuality = .high
Also, I'm not sure, but image quality could be improved if you initialize the image context with: UIGraphicsBeginImageContextWithOptions(rect.size, false, 0.0)
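Putting both suggestions together, a sketch of the extension with a scale-aware context (the interpolation setting is optional):
extension UIScrollView {
    /// Snapshot of the currently visible area at the screen's scale.
    var snapshotVisibleArea: UIImage? {
        // Scale 0 = use the screen's scale, which avoids the low-resolution output
        // of UIGraphicsBeginImageContext (that function always uses scale 1).
        UIGraphicsBeginImageContextWithOptions(bounds.size, false, 0)
        defer { UIGraphicsEndImageContext() }
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        context.interpolationQuality = .high
        // Shift the context so the visible region of the content is what gets drawn.
        context.translateBy(x: -contentOffset.x, y: -contentOffset.y)
        layer.render(in: context)
        return UIGraphicsGetImageFromCurrentImageContext()
    }
}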

GPUImage doubles image size - iOS/Swift

I am trying to convert an image to grayscale using GPUImage, and I wrote an extension to get the work done. The grayscale conversion itself is fine, but the output image ends up doubled in size. In my case I need the image to stay at its exact size. Can someone please help me with this? Any help would be highly appreciated.
This is the extension I wrote
import UIKit
import GPUImage

extension UIImage {
    public func grayscale() -> UIImage? {
        var processedImage = self
        print("1: " + "\(processedImage.size)")
        let image = GPUImagePicture(image: processedImage)
        let grayFilter = GPUImageGrayscaleFilter()
        image?.addTarget(grayFilter)
        grayFilter.useNextFrameForImageCapture()
        image?.processImage()
        processedImage = grayFilter.imageFromCurrentFramebuffer()
        print("2: " + "\(processedImage.size)")
        return processedImage
    }
}
This is the output in the console.
Edit: I know the image can be resized later on, but I need to know why this is happening and whether there is anything I can do to keep the image at its original size when using GPUImage.
Try to scale the image later:
if let cgImage = processedImage.cgImage {
    // The scale value 2.0 here should be replaced by the original image's scale.
    let scaledImage = UIImage(cgImage: cgImage, scale: 2.0, orientation: processedImage.imageOrientation)
    return scaledImage
}
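Folding that into the extension, a sketch that carries the source image's scale through instead of hardcoding 2.0 (same GPUImage calls as in the question):
import UIKit
import GPUImage

extension UIImage {
    public func grayscale() -> UIImage? {
        let picture = GPUImagePicture(image: self)
        let grayFilter = GPUImageGrayscaleFilter()
        picture?.addTarget(grayFilter)
        grayFilter.useNextFrameForImageCapture()
        picture?.processImage()
        // imageFromCurrentFramebuffer returns a scale-1 bitmap, so its point size
        // is doubled (or tripled) on retina devices.
        guard let filtered = grayFilter.imageFromCurrentFramebuffer(),
              let cgImage = filtered.cgImage else { return nil }
        // Re-wrap the bitmap with the original scale so the point size matches self.size.
        return UIImage(cgImage: cgImage, scale: self.scale, orientation: self.imageOrientation)
    }
}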

Applying CIFilter to UIImage results in resized and repositioned image

After applying a CIFilter to a photo captured with the camera, the resulting image shrinks and repositions itself.
I thought that if I could get the original image's size and orientation, the result would scale accordingly and the image view would stay pinned to the corners of the screen. However, nothing changes with this approach, and I am not aware of a way to properly get the image to scale to the full size of the screen.
func applyBloom() -> UIImage {
    let ciImage = CIImage(image: image) // image is from UIImageView
    let filteredImage = ciImage?.applyingFilter("CIBloom",
                                                withInputParameters: [kCIInputRadiusKey: 8,
                                                                      kCIInputIntensityKey: 1.00])
    let originalScale = image.scale
    let originalOrientation = image.imageOrientation
    if let image = filteredImage {
        let image = UIImage(ciImage: image, scale: originalScale, orientation: originalOrientation)
        return image
    }
    return self.image
}
Picture description: the captured photo and a screenshot of the filtered result, where the empty spacing is the result of the image shrinking.
Try something like this; replace your function with:
func applyBloom() -> UIImage {
    let ciInputImage = CIImage(image: image) // image is from UIImageView
    let ciOutputImage = ciInputImage?.applyingFilter("CIBloom",
                                                     withInputParameters: [kCIInputRadiusKey: 8, kCIInputIntensityKey: 1.00])
    let context = CIContext()
    // Render the filtered output into a CGImage the size of the *input* extent.
    let cgOutputImage = context.createCGImage(ciOutputImage!, from: ciInputImage!.extent)
    return UIImage(cgImage: cgOutputImage!)
}
I renamed various variables to help explain what's happening.
Obviously, depending on your code, some tweaking to optionals and unwrapping may be needed.
What's happening is this: take the filtered/output CIImage and, using a CIContext, create a CGImage the size of the input CIImage.
Be aware that a CIContext is expensive. If you already have one created, you should probably use it.
Pretty much, a UIImage's size is the same as a CIImage's extent. (I say pretty much because some generated CIImages can have infinite extents.)
Depending on your specific needs (and your UIImageView), you may want to use the output CIImage extent instead. Usually though, they are the same.
Last, a suggestion. If you are trying to use a CIFilter to show "near real-time" changes to an image (like a photo editor), consider the major performance improvements you'll get by using CIImages and a GLKView instead of UIImages and a UIImageView. The former uses the device's GPU instead of the CPU.
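Since creating the CIContext is the expensive part, one common pattern is to keep a single instance around and reuse it for every render. A minimal sketch along those lines (BloomFilter and its method names are illustrative, not from the question; it uses the current parameters: label for applyingFilter):
import UIKit
import CoreImage

// Illustrative wrapper: one reusable CIContext for all filtering work.
final class BloomFilter {
    // Creating a CIContext per call is expensive; keep one and reuse it.
    private let ciContext = CIContext()

    func apply(to image: UIImage) -> UIImage? {
        guard let input = CIImage(image: image) else { return nil }
        let output = input.applyingFilter("CIBloom",
                                          parameters: [kCIInputRadiusKey: 8,
                                                       kCIInputIntensityKey: 1.0])
        // Render at the input's extent so the result keeps the original pixel size.
        guard let cgImage = ciContext.createCGImage(output, from: input.extent) else { return nil }
        return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
    }
}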
This can also happen if a CIFilter outputs an image with dimensions different from the input image (e.g. with CIPixellate).
In that case, simply tell the CIContext to render the image into a smaller rectangle:
let cgOutputImage = context.createCGImage(ciOutputImage, from: ciInputImage.extent.insetBy(dx: 20, dy: 20))

Cropping CIImage

I have a class that takes a UIImage and initializes a CIImage with it like so:
workingImage = CIImage.init(image: baseImage!)
Then the image is used to cut out nine neighbouring squares in a 3x3 pattern, in a loop:
for x in 0..<3 {
    for y in 0..<3 {
        croppingRect = CGRect(x: CGFloat(Double(x) * sideLength + startPointX),
                              y: CGFloat(Double(y) * sideLength + startPointY),
                              width: CGFloat(sideLength),
                              height: CGFloat(sideLength))
        let tmpImg = (workingImage?.cropping(to: croppingRect))!
    }
}
Those tmpImgs are inserted into a table and used later, but that's beside the point.
This code works on iOS 9, and on iOS 10 simulators, but not on an actual iOS 10 device. The images produced are either all empty, or one of them is roughly half of what it's supposed to be, with the rest being, again, empty.
Is this not how it's supposed to be done in iOS 10?
The heart of the matter is that passing through CIImage is not the way to crop a UIImage. For one thing, coming back from CIImage to UIImage is a complicated business. For another, the whole round-trip is unnecessary.
How To Crop
To crop an image, make an image graphics context of the desired cropped size and call draw(at:) on the UIImage to draw it at the desired point relative to the graphics context, so that the desired portion of the image falls into the context. Now extract the resulting new image and close the context.
To demonstrate, I'll crop to one of the thirds you are trying to crop to, namely the lower right third:
let sz = baseImage.size
UIGraphicsBeginImageContextWithOptions(
    CGSize(width: sz.width/3.0, height: sz.height/3.0),
    false, 0)
baseImage.draw(at: CGPoint(x: -sz.width/3.0*2.0, y: -sz.height/3.0*2.0))
let tmpImg = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Original image (baseImage):
Cropped image (tmpImg):
The other sections are completely parallel.
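For completeness, here is a sketch of the same draw(at:) technique looped over all nine thirds (ninths(of:) is just an illustrative helper name, not from the question):
import UIKit

// Crop an image into its nine thirds (3x3 grid) using draw(at:).
func ninths(of baseImage: UIImage) -> [UIImage] {
    let tileSize = CGSize(width: baseImage.size.width / 3.0,
                          height: baseImage.size.height / 3.0)
    var tiles: [UIImage] = []
    for y in 0..<3 {
        for x in 0..<3 {
            UIGraphicsBeginImageContextWithOptions(tileSize, false, 0)
            // Offset the drawing so that the (x, y) tile lands inside the context.
            baseImage.draw(at: CGPoint(x: -tileSize.width * CGFloat(x),
                                       y: -tileSize.height * CGFloat(y)))
            if let tile = UIGraphicsGetImageFromCurrentImageContext() {
                tiles.append(tile)
            }
            UIGraphicsEndImageContext()
        }
    }
    return tiles
}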
Core Image's coordinate system doesn't match UIKit's (its origin is in the bottom-left corner), so the rect needs to be mirrored vertically.
So in your specific case, you want:
var ciRect = croppingRect
ciRect.origin.y = workingImage!.extent.height - ciRect.origin.y - ciRect.height
let tmpImg = workingImage!.cropped(to: ciRect)
This definitely works for iOS 10+.
In the more general case, we can make a UIImage extension that covers both possible backings (CGImage and CIImage); the CGImage path is also much faster than draw(at:):
extension UIImage {
    /// Return a new image cropped to a rectangle.
    /// - parameter rect: The rectangle to crop to.
    public func cropped(to rect: CGRect) -> UIImage {
        // A UIImage is backed by either a CGImage, a CIImage, or nothing.
        if let cgImage = self.cgImage {
            // CGImage.cropping(to:) is magnitudes faster than UIImage.draw(at:).
            if let cgCroppedImage = cgImage.cropping(to: rect) {
                return UIImage(cgImage: cgCroppedImage)
            } else {
                return UIImage()
            }
        }
        if let ciImage = self.ciImage {
            // Core Image's coordinate system doesn't match UIKit's, so the rect needs to be mirrored.
            var ciRect = rect
            ciRect.origin.y = ciImage.extent.height - ciRect.origin.y - ciRect.height
            let ciCroppedImage = ciImage.cropped(to: ciRect)
            return UIImage(ciImage: ciCroppedImage)
        }
        return self
    }
}
I've made a pod for it, so the source code is at https://github.com/Coeur/ImageEffects/blob/master/SwiftImageEffects/ImageEffects%2Bextensions.swift

Why is my CGImage 3 x the size of the UIImage

I have a function that takes a UIImage and a colour and uses it to return a UIImage in just that colour.
As part of the function, the UIImage is converted to a CGImage, but for some reason the CGImage is 3 times the size of the UIImage, which means the final result is 3 times the size it should be.
Here is the function
func coloredImageNamed(name: String, color: UIColor) -> UIImage {
    let startImage: UIImage = UIImage(named: name)!
    print("UIImage W: \(startImage.size.width) H: \(startImage.size.height)")
    let maskImage: CGImage = startImage.CGImage!
    print("CGImage W: \(CGImageGetWidth(maskImage)) H: \(CGImageGetHeight(maskImage))")
    // Make the image that the mask is applied to (just a rectangle of the color we want to use)
    let colorImageSize: CGSize = CGSizeMake(startImage.size.width, startImage.size.height)
    let colorFillImage: CGImage = getImageWithColor(color, size: colorImageSize).CGImage!
    // Turn the mask image into a mask
    let mask: CGImage = CGImageMaskCreate(CGImageGetWidth(maskImage),
                                          CGImageGetHeight(maskImage),
                                          CGImageGetBitsPerComponent(maskImage),
                                          CGImageGetBitsPerPixel(maskImage),
                                          CGImageGetBytesPerRow(maskImage),
                                          CGImageGetDataProvider(maskImage), nil, false)!
    // Create the new image, convert back to UIImage, then return
    let masked: CGImage! = CGImageCreateWithMask(colorFillImage, mask)
    let returnImage: UIImage! = UIImage(CGImage: masked)
    return returnImage
}
You can see in the first few lines I have used print() to print the sizes of the UIImage and the CGImage to the console. The CGImage is always 3 times the size of the UIImage... I suspect it has something to do with @2x, @3x etc.?
So the question is: why is this happening, and what can I do to get an image out that is the same size as the one I started with?
It's because UIImage has a scale property. This mediates between pixels and points. So, for example, a UIImage created from a 180x180 pixel image, but with a scale of 3, is automatically treated as having size 60x60 points. It will report its size as 60x60, and will also look good on a 3x resolution screen where 3 pixels correspond to 1 point. And, as you rightly guess, the #3x suffix, or the corresponding location in the asset catalog, tells the system to give the UIImage a scale of 3 as it forms it.
But a CGImage does not have such a property; it's just a bitmap, the actual pixels of the image. So a CGImage formed from a UIImage created from a 180x180 pixel image is still 180x180: its size is reported in pixels, which is three times the UIImage's 60x60 point size.
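If you want the round trip back to UIImage to keep the original point size, pass the scale (and orientation) through when re-wrapping the CGImage. A minimal sketch in current Swift syntax (the helper name is mine, not from the question):
import UIKit

// Re-wrap a CGImage with the source UIImage's scale so the resulting UIImage
// reports the same point size (e.g. 60x60 at scale 3) as the original.
func rewrap(_ cgImage: CGImage, matching original: UIImage) -> UIImage {
    return UIImage(cgImage: cgImage,
                   scale: original.scale,
                   orientation: original.imageOrientation)
}
In the question's function, that would mean creating returnImage with UIImage(CGImage:scale:orientation:) instead of UIImage(CGImage:).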