GPUImage doubles image size - iOS/Swift

I am trying to convert an image to grayscale using GPUImage, and I wrote an extension to do it. The grayscale conversion works, but the output image comes out at double the size of the input. In my case I need the image to stay at its exact original size. Can someone please help me with this? Any help would be highly appreciated.
This is the extension I wrote:
import UIKit
import GPUImage

extension UIImage {
    public func grayscale() -> UIImage? {
        var processedImage = self
        print("1: \(processedImage.size)")
        let image = GPUImagePicture(image: processedImage)
        let grayFilter = GPUImageGrayscaleFilter()
        image?.addTarget(grayFilter)
        grayFilter.useNextFrameForImageCapture()
        image?.processImage()
        processedImage = grayFilter.imageFromCurrentFramebuffer()
        print("2: \(processedImage.size)")
        return processedImage
    }
}
This is the output in the console:
Edit: I know the image can be resized later on, but I need to know why this is happening and whether there is anything I can do to keep the image at its original size using GPUImage.

Try adjusting the scale of the image afterwards. The size you print is in points, and GPUImage gives you back a UIImage with a scale of 1.0; if your original image has a scale of 2.0 (Retina), its point size will appear doubled even though the pixel dimensions are the same:
if let cgImage = processedImage.cgImage {
    // The scale value 2.0 here should be replaced by the original image's scale.
    let scaledImage = UIImage(cgImage: cgImage, scale: 2.0, orientation: processedImage.imageOrientation)
    return scaledImage
}
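Putting it together, a minimal sketch of the extension with that fix folded in, assuming the same GPUImage API used above; the method name grayscaleKeepingSize is illustrative:

import UIKit
import GPUImage

extension UIImage {
    public func grayscaleKeepingSize() -> UIImage? {
        // Remember the original scale and orientation so the point size is preserved.
        let originalScale = self.scale
        let originalOrientation = self.imageOrientation

        let picture = GPUImagePicture(image: self)
        let grayFilter = GPUImageGrayscaleFilter()
        picture?.addTarget(grayFilter)
        grayFilter.useNextFrameForImageCapture()
        picture?.processImage()

        guard let filtered = grayFilter.imageFromCurrentFramebuffer(),
              let cgImage = filtered.cgImage else { return nil }

        // Rebuild the UIImage with the original scale instead of GPUImage's default of 1.0.
        return UIImage(cgImage: cgImage, scale: originalScale, orientation: originalOrientation)
    }
}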

Related

Need some help converting cvpixelbuffer data to a jpeg/png in iOS

So I'm trying to get a JPEG/PNG representation of the grayscale depth maps that are typically used in iOS image-depth examples. The depth data is stored in each JPEG as auxiliary data. I've followed some tutorials and have no problem rendering this grayscale data to the screen, but I can find no way to actually save it as a JPEG/PNG. I'm pretty much using this code: https://www.raywenderlich.com/168312/image-depth-maps-tutorial-ios-getting-started
The depth data is put into a cvpixelbuffer and manipulated accordingly. I believe it's in the format kCVPixelFormatType_DisparityFloat32.
While I'm able to see this data represented correctly on screen, I'm unable to use UIImagePNGRepresentation or UIImageJPEGRepresentation on it. Sure, I could manually capture a screenshot, but that's not really ideal.
I have a suspicion that the CVPixelBuffer data format is not compatible with these UIImage functions, and that's why I can't get them to spit out an image.
Does anyone have any suggestions?
// CVPixelBuffer to UIImage
let ciImageDepth = CIImage(cvPixelBuffer: cvPixelBufferDepth)
let contextDepth = CIContext(options: nil)
let cgImageDepth = contextDepth.createCGImage(ciImageDepth, from: ciImageDepth.extent)!
let uiImageDepth = UIImage(cgImage: cgImageDepth, scale: 1, orientation: .up)
// Save UIImage to Photos Album
UIImageWriteToSavedPhotosAlbum(uiImageDepth, nil, nil, nil)
I figured it out: I had to convert to a CGImage first. I ended up going from CVPixelBuffer to CIImage to CGImage to UIImage.
Posting a Swift code sample in case anyone wants to use it:
let ciimage = CIImage(cvPixelBuffer: depthBuffer) // depth cvPixelBuffer
let depthUIImage = UIImage(ciImage: ciimage)
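Note that a UIImage backed only by a CIImage often can't be encoded directly, which is why the CGImage step matters. Below is a minimal sketch of the full path described above; the helper name saveDepthAsPNG, the depthBuffer parameter, and the destination URL are illustrative, not from the original post:

import UIKit
import CoreImage

// Hypothetical helper: render the depth buffer through Core Image into a CGImage,
// wrap it in a UIImage, then encode it as PNG data and write it to disk.
func saveDepthAsPNG(_ depthBuffer: CVPixelBuffer, to url: URL) {
    let ciImage = CIImage(cvPixelBuffer: depthBuffer)
    let context = CIContext(options: nil)
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return }
    let uiImage = UIImage(cgImage: cgImage)
    // PNG encoding works here because the UIImage is backed by a CGImage.
    guard let pngData = UIImagePNGRepresentation(uiImage) else { return }
    try? pngData.write(to: url)
}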

Applying CIFilter to UIImage results in resized and repositioned image

After applying a CIFilter to a photo captured with the camera, the image shrinks and repositions itself.
I was thinking that if I could get the original image's size and orientation, it would scale accordingly and pin the image view to the corners of the screen. However, nothing changes with this approach, and I'm not aware of a way to properly get the image to scale to the full size of the screen.
func applyBloom() -> UIImage {
    let ciImage = CIImage(image: image) // image is from UIImageView
    let filteredImage = ciImage?.applyingFilter("CIBloom",
                                                withInputParameters: [kCIInputRadiusKey: 8,
                                                                      kCIInputIntensityKey: 1.00])
    let originalScale = image.scale
    let originalOrientation = image.imageOrientation
    if let image = filteredImage {
        let image = UIImage(ciImage: image, scale: originalScale, orientation: originalOrientation)
        return image
    }
    return self.image
}
Picture description:
A captured photo and a screenshot of the resulting image, with empty spacing as a result of the image shrinking.
Try something like this. Replace your function with:
func applyBloom() -> UIImage {
    let ciInputImage = CIImage(image: image)! // image is from UIImageView
    let ciOutputImage = ciInputImage.applyingFilter("CIBloom",
                                                    withInputParameters: [kCIInputRadiusKey: 8,
                                                                          kCIInputIntensityKey: 1.00])
    let context = CIContext()
    let cgOutputImage = context.createCGImage(ciOutputImage, from: ciInputImage.extent)
    return UIImage(cgImage: cgOutputImage!)
}
I renamed various variables to help explain what's happening.
Obviously, depending on your code, some tweaking to optionals and unwrapping may be needed.
What's happening is this: take the filtered/output CIImage and, using a CIContext, create a CGImage the size of the input CIImage.
Be aware that creating a CIContext is expensive. If you already have one created, you should probably reuse it.
Pretty much, a UIImage's size is the same as a CIImage's extent. (I say pretty much because some generated CIImages can have infinite extents.)
Depending on your specific needs (and your UIImageView), you may want to use the output CIImage's extent instead. Usually, though, they are the same.
Last, a suggestion: if you are trying to use a CIFilter to show "near real-time" changes to an image (like a photo editor), consider the major performance improvements you'll get by using CIImages and a GLKView rather than UIImages and a UIImageView. The former uses the device's GPU instead of the CPU.
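On the point about reusing a context, here is a minimal sketch of holding onto a single CIContext rather than creating one per filter pass; the class name FilterViewController and property name sharedContext are illustrative:

import UIKit
import CoreImage

final class FilterViewController: UIViewController {
    // Created once and reused; building a new CIContext for every filter pass is expensive.
    private let sharedContext = CIContext()

    func render(_ ciImage: CIImage) -> UIImage? {
        guard let cgImage = sharedContext.createCGImage(ciImage, from: ciImage.extent) else {
            return nil
        }
        return UIImage(cgImage: cgImage)
    }
}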
This could also happen if a CIFilter outputs an image with dimensions different from the input image (e.g. with CIPixellate).
In which case, simply tell the CIContext to render the image in a smaller rectangle:
let cgOutputImage = context.createCGImage(ciOutputImage, from: ciInputImage.extent.insetBy(dx: 20, dy: 20))

If a filter is applied to a PNG where height > width, it rotates the image 90 degrees. How can I efficiently prevent this?

I'm making a simple filter app. I've found that if you load an image from the camera roll that is a PNG (PNGs have no orientation data flag) and its height is greater than its width, then upon applying certain distortion filters to that image it will rotate and present itself as if it were a landscape image.
I found the technique below online somewhere in the many tabs I had open, and it seems to do exactly what I want. It uses the original scale and orientation of the image from when it was first loaded.
let newImage = UIImage(CIImage:(output), scale: 1.0, orientation: self.origImage.imageOrientation)
but this is the warning I get when I try to use it:
Ambiguous use of 'init(CIImage:scale:orientation:)'
Here's the entire thing I'm trying to get working:
// global variables
var image: UIImage!
var origImage: UIImage!

func setFilter(action: UIAlertAction) {
    origImage = image
    // make sure we have a valid image before continuing!
    guard let image = self.imageView.image?.cgImage else { return }
    let openGLContext = EAGLContext(api: .openGLES3)
    let context = CIContext(eaglContext: openGLContext!)
    let ciImage = CIImage(cgImage: image)
    let currentFilter = CIFilter(name: "CIBumpDistortion")
    currentFilter?.setValue(ciImage, forKey: kCIInputImageKey)
    if let output = currentFilter?.value(forKey: kCIOutputImageKey) as? CIImage {
        // the line below is the one giving me errors, which I thought would work
        let newImage = UIImage(CIImage:(output), scale: 1.0, orientation: self.image.imageOrientation)
        self.imageView.image = UIImage(cgImage: context.createCGImage(newImage, from: output.extent)!)
    }
}
The filters all work; unfortunately they rotate images like the ones described above by 90 degrees, for the reasons I suspect.
I've tried some other methods, like using an extension that checks the orientation of UIImages: converting the CIImage to a UIImage, using the extension, then trying to convert it back to a CIImage, or just loading the UIImage into the imageView for output. I ran into snag after snag with that process, and it started to seem really convoluted just to get certain images into their default orientation.
Any advice would be greatly appreciated!
EDIT: here's where I got the method I was trying: When applying a filter to a UIImage the result is upside down
I found the answer. My biggest issue was the "Ambiguous use of 'init(CIImage:scale:orientation:)'" error.
It turned out that Xcode was auto-populating the code as 'CIImage:scale:orientation:' when it should have been 'ciImage:scale:orientation:'. The very vague error left a new dev like me scratching my head for 3 days over this. (This was true for the CGImage and UIImage inits as well, but my original error was with CIImage, so I used that to explain.)
With that knowledge I was able to formulate the code below for my new output:
if let output = currentFilter?.value(forKey: kCIOutputImageKey) as? CIImage {
    let outputImage = UIImage(cgImage: context.createCGImage(output, from: output.extent)!)
    let imageTurned = UIImage(cgImage: outputImage.cgImage!, scale: CGFloat(1.0), orientation: origImage.imageOrientation)
    centerScrollViewContents()
    self.imageView.image = imageTurned
}
This code replaces the if let output block in the OP.

Cropping CIImage

I have a class that takes a UIImage and initializes a CIImage with it, like so:
workingImage = CIImage.init(image: baseImage!)
Then the image is used to cut out 9 neighbouring squares in a 3x3 pattern, in a loop:
for x in 0..<3 {
    for y in 0..<3 {
        croppingRect = CGRect(x: CGFloat(Double(x) * sideLength + startPointX),
                              y: CGFloat(Double(y) * sideLength + startPointY),
                              width: CGFloat(sideLength),
                              height: CGFloat(sideLength))
        let tmpImg = (workingImage?.cropping(to: croppingRect))!
    }
}
Those tmpImgs are inserted into a table and used later, but that's beside the point.
This code works on iOS 9 and on iOS 10 simulators, but not on an actual iOS 10 device. The images produced are either all empty, or one of them is about half of what it's supposed to be, with the rest being, again, empty.
Is this not how it's supposed to be done in iOS 10?
The heart of the matter is that passing through CIImage is not the way to crop a UIImage. For one thing, coming back from CIImage to UIImage is a complicated business. For another, the whole round-trip is unnecessary.
How To Crop
To crop an image, make an image graphics context of the desired cropped size and call draw(at:) on the UIImage to draw it at the desired point relative to the graphics context, so that the desired portion of the image falls into the context. Now extract the resulting new image and close the context.
To demonstrate, I'll crop to one of the thirds you are trying to crop to, namely the lower right third:
let sz = baseImage.size
UIGraphicsBeginImageContextWithOptions(
    CGSize(width: sz.width/3.0, height: sz.height/3.0),
    false, 0)
baseImage.draw(at: CGPoint(x: -sz.width/3.0*2.0, y: -sz.height/3.0*2.0))
let tmpImg = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Original image (baseImage):
Cropped image (tmpImg):
The other sections are completely parallel.
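To show how parallel they are, here is a minimal sketch that applies the same draw(at:) technique in a loop to produce all nine pieces; the helper name cropIntoThirds and the row-major ordering of the result are illustrative choices:

func cropIntoThirds(_ baseImage: UIImage) -> [UIImage] {
    let sz = baseImage.size
    let pieceSize = CGSize(width: sz.width / 3.0, height: sz.height / 3.0)
    var pieces: [UIImage] = []
    for row in 0..<3 {
        for col in 0..<3 {
            UIGraphicsBeginImageContextWithOptions(pieceSize, false, 0)
            // Shift the image so the desired third lands inside the context.
            baseImage.draw(at: CGPoint(x: -pieceSize.width * CGFloat(col),
                                       y: -pieceSize.height * CGFloat(row)))
            if let piece = UIGraphicsGetImageFromCurrentImageContext() {
                pieces.append(piece)
            }
            UIGraphicsEndImageContext()
        }
    }
    return pieces
}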
Core Image's coordinate system does not match UIKit's, so the rect needs to be mirrored vertically.
So in your specific case, you want:
var ciRect = croppingRect
ciRect.origin.y = workingImage!.extent.height - ciRect.origin.y - ciRect.height
let tmpImg = workingImage!.cropped(to: ciRect)
This definitely works for iOS 10+.
In a more general case, we would make a UIImage extension that covers both possible coordinate systems, and that's way faster than draw(at:):
extension UIImage {
    /// Return a new image cropped to a rectangle.
    /// - parameter rect:
    ///   The rectangle to crop.
    open func cropped(to rect: CGRect) -> UIImage {
        // a UIImage is either initialized using a CGImage, a CIImage, or nothing
        if let cgImage = self.cgImage {
            // CGImage.cropping(to:) is magnitudes faster than UIImage.draw(at:)
            if let cgCroppedImage = cgImage.cropping(to: rect) {
                return UIImage(cgImage: cgCroppedImage)
            } else {
                return UIImage()
            }
        }
        if let ciImage = self.ciImage {
            // Core Image's coordinate system mismatches with UIKit, so the rect needs to be mirrored.
            var ciRect = rect
            ciRect.origin.y = ciImage.extent.height - ciRect.origin.y - ciRect.height
            let ciCroppedImage = ciImage.cropped(to: ciRect)
            return UIImage(ciImage: ciCroppedImage)
        }
        return self
    }
}
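With this extension in place, the question's 3x3 loop can crop the UIImage directly instead of going through CIImage. A sketch of the call site, assuming the baseImage and croppingRect from the question:

let tmpImg = baseImage!.cropped(to: croppingRect)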
I've made a pod for it, so the source code is at https://github.com/Coeur/ImageEffects/blob/master/SwiftImageEffects/ImageEffects%2Bextensions.swift

Resolution Loss when generating an Image from UIImageView

Okay, sorry if the title is a little confusing. Basically, I am trying to get the image/subviews of the image view and combine them into a single exportable UIImage.
Here is my current code; however, it has a large resolution loss.
func generateImage() -> UIImage {
    UIGraphicsBeginImageContext(environmentImageView.frame.size)
    var context: CGContextRef = UIGraphicsGetCurrentContext()
    environmentImageView.layer.renderInContext(context)
    var img: UIImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return img
}
You have to set the scale of the context to be Retina:
UIGraphicsBeginImageContextWithOptions(environmentImageView.frame.size, false, 0)
Passing 0 for the scale means using the scale of the main screen, which works for non-Retina devices as well.
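For completeness, a minimal sketch of the original generateImage() with that one change applied, assuming the same environmentImageView; the return type is made optional here, and render(in:) is the current Swift spelling of renderInContext:

func generateImage() -> UIImage? {
    // 0 as the scale makes the context match the screen's scale (e.g. 2x/3x on Retina).
    UIGraphicsBeginImageContextWithOptions(environmentImageView.frame.size, false, 0)
    defer { UIGraphicsEndImageContext() }
    guard let context = UIGraphicsGetCurrentContext() else { return nil }
    environmentImageView.layer.render(in: context)
    return UIGraphicsGetImageFromCurrentImageContext()
}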
