I have a class that takes a UIImage and initializes a CIImage with it like so:
workingImage = CIImage.init(image: baseImage!)
The image is then used to cut out nine neighbouring squares in a 3x3 pattern, in a loop:
for x in 0..<3
{
    for y in 0..<3
    {
        croppingRect = CGRect(x: CGFloat(Double(x) * sideLength + startPointX),
                              y: CGFloat(Double(y) * sideLength + startPointY),
                              width: CGFloat(sideLength),
                              height: CGFloat(sideLength))
        let tmpImg = (workingImage?.cropping(to: croppingRect))!
    }
}
Those tmpImgs are inserted into a table and later used, but that's beside the point.
This code works on iOS 9 and on iOS 10 simulators, but not on an actual iOS 10 device. The images produced are either all empty, or one of them is roughly half of what it's supposed to be, with the rest being, again, empty.
Is this not how it's supposed to be done in iOS 10?
The heart of the matter is that passing through CIImage is not the way to crop a UIImage. For one thing, coming back from CIImage to UIImage is a complicated business. For another, the whole round-trip is unnecessary.
How To Crop
To crop an image, make an image graphics context of the desired cropped size and call draw(at:) on the UIImage to draw it at the desired point relative to the graphics context, so that the desired portion of the image falls into the context. Now extract the resulting new image and close the context.
To demonstrate, I'll crop to one of the nine squares you are trying to crop to, namely the lower right one:
let sz = baseImage.size
UIGraphicsBeginImageContextWithOptions(
    CGSize(width: sz.width/3.0, height: sz.height/3.0),
    false, 0)
baseImage.draw(at: CGPoint(x: -sz.width/3.0*2.0, y: -sz.height/3.0*2.0))
let tmpImg = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Original image (baseImage):
Cropped image (tmpImg):
The other sections are completely parallel.
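For reference, here's a minimal sketch of the same technique applied to all nine squares, assuming (as in the example above) that the squares are exact thirds of baseImage and that the pieces are simply collected into an array:
let sz = baseImage.size
let side = CGSize(width: sz.width/3.0, height: sz.height/3.0)
var pieces: [UIImage] = []
for x in 0..<3 {
    for y in 0..<3 {
        UIGraphicsBeginImageContextWithOptions(side, false, 0)
        // Shift the full image so that the (x, y) square lands at the context's origin.
        baseImage.draw(at: CGPoint(x: -side.width * CGFloat(x), y: -side.height * CGFloat(y)))
        if let piece = UIGraphicsGetImageFromCurrentImageContext() {
            pieces.append(piece)
        }
        UIGraphicsEndImageContext()
    }
}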
Core Image's coordinate system does not match UIKit's (its origin is at the bottom left rather than the top left), so the rect needs to be mirrored vertically.
So in your specific case, you want:
var ciRect = croppingRect
ciRect.origin.y = workingImage!.extent.height - ciRect.origin.y - ciRect.height
let tmpImg = workingImage!.cropped(to: ciRect)
This definitely works for iOS 10+.
In the more general case, we can make a UIImage extension that covers both possible backing stores (CGImage and CIImage) along with their differing coordinate systems, and that's way faster than draw(at:):
extension UIImage {
    /// Returns a new image cropped to a rectangle.
    /// - parameter rect: The rectangle to crop to.
    open func cropped(to rect: CGRect) -> UIImage {
        // a UIImage is backed either by a CGImage, a CIImage, or nothing
        if let cgImage = self.cgImage {
            // CGImage.cropping(to:) is orders of magnitude faster than UIImage.draw(at:)
            if let cgCroppedImage = cgImage.cropping(to: rect) {
                return UIImage(cgImage: cgCroppedImage)
            } else {
                return UIImage()
            }
        }
        if let ciImage = self.ciImage {
            // Core Image's coordinate system doesn't match UIKit's, so the rect needs to be mirrored.
            var ciRect = rect
            ciRect.origin.y = ciImage.extent.height - ciRect.origin.y - ciRect.height
            let ciCroppedImage = ciImage.cropped(to: ciRect)
            return UIImage(ciImage: ciCroppedImage)
        }
        return self
    }
}
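Usage in the question's scenario is then a one-liner on the original UIImage, with no CIImage round-trip; note that for a CGImage-backed image the rect is interpreted in the underlying bitmap's pixel coordinates:
let tmpImg = baseImage!.cropped(to: croppingRect)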
I've made a pod for it, so the source code is at https://github.com/Coeur/ImageEffects/blob/master/SwiftImageEffects/ImageEffects%2Bextensions.swift
I am working on an MTKView-backed paint program which can replay painting history via an array of MTLTextures that store keyframes. I am having an issue in which sometimes the content of these MTLTextures is scrambled.
As an example, say I want to store a section of the drawing below as a keyframe:
During playback, sometimes the drawing will display exactly as intended, but sometimes, it will display like this:
Note the distorted portion of the picture. (The undistorted portion constitutes a static background image that's not part of the keyframe in question)
I describe the way I create individual MTLTextures from the MTKView's currentDrawable below. Because of color depth issues I won't go into, the process may seem a little roundabout.
I first get a CGImage of the subsection of the screen that constitutes a keyframe.
I use that CGImage to create an MTLTexture tied to the MTKView's device.
I store that MTLTexture into an MTLTextureStructure that keeps the MTLTexture and the keyframe's bounding box (which I'll need later)
Lastly, I store it in an array of MTLTextureStructures (keyframeMetalArray). During playback, when I hit a keyframe, I get it from this keyframeMetalArray.
The associated code is outlined below.
let keyframeCGImage = weakSelf!.canvasMetalViewPainting.mtlTextureToCGImage(bbox: keyframeBbox, copyMode: copyTextureMode.textureKeyframe) // convert from MetalTexture to CGImage
let keyframeMTLTexture = weakSelf!.canvasMetalViewPainting.CGImageToMTLTexture(cgImage: keyframeCGImage)
let keyframeMTLTextureStruc = mtlTextureStructure(texture: keyframeMTLTexture, bbox: keyframeBbox, strokeType: brushTypeMode.brush)
weakSelf!.keyframeMetalArray.append(keyframeMTLTextureStruc)
Without providing specifics about how each conversion is happening, I wonder if, from an architectural design standpoint, I'm overlooking something that is corrupting the data stored in the keyframeMetalArray. It may be unwise to try to store these MTLTextures in volatile arrays, but I don't know that for a fact. I just figured using MTLTextures would be the quickest way to update content.
By the way, when I swap out arrays of keyframes to arrays of UIImage.pngData, I have no display issues, but it's a lot slower. On the plus side, it tells me that the initial capture from currentDrawable to keyframeCGImage is working just fine.
Any thoughts would be appreciated.
P.S. Adding a bit of detail based on the feedback:
mtlTextureToCGImage:
func mtlTextureToCGImage(bbox: CGRect, copyMode: copyTextureMode) -> CGImage {
    let kciOptions = [convertFromCIContextOption(CIContextOption.outputPremultiplied): true,
                      convertFromCIContextOption(CIContextOption.useSoftwareRenderer): false] as [String : Any]
    let bboxStrokeScaledFlippedY = CGRect(x: (bbox.origin.x * self.viewContentScaleFactor),
                                          y: ((self.viewBounds.height - bbox.origin.y - bbox.height) * self.viewContentScaleFactor),
                                          width: (bbox.width * self.viewContentScaleFactor),
                                          height: (bbox.height * self.viewContentScaleFactor))
    let strokeCIImage = CIImage(mtlTexture: metalDrawableTextureKeyframe,
                                options: convertToOptionalCIImageOptionDictionary(kciOptions))!.oriented(CGImagePropertyOrientation.downMirrored)
    let imageCropCG = cicontext.createCGImage(strokeCIImage, from: bboxStrokeScaledFlippedY, format: CIFormat.RGBA8, colorSpace: colorSpaceGenericRGBLinear)
    cicontext.clearCaches()
    return imageCropCG!
} // end of func mtlTextureToCGImage(bbox: CGRect)
CGImageToMTLTexture:
func CGImageToMTLTexture(cgImage: CGImage) -> MTLTexture {
    // Note that we forego the more direct method of creating stampTexture:
    //let stampTexture = try! MTKTextureLoader(device: self.device!).newTexture(cgImage: strokeUIImage.cgImage!, options: nil)
    // because MTKTextureLoader seems to be doing additional processing which messes with the resulting texture/colorspace
    let width = Int(cgImage.width)
    let height = Int(cgImage.height)
    let bytesPerPixel = 4
    let rowBytes = width * bytesPerPixel

    let texDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm,
                                                                 width: width,
                                                                 height: height,
                                                                 mipmapped: false)
    texDescriptor.usage = MTLTextureUsage(rawValue: MTLTextureUsage.shaderRead.rawValue)
    texDescriptor.storageMode = .shared
    guard let stampTexture = device!.makeTexture(descriptor: texDescriptor) else {
        return brushTextureSquare // return SOMETHING
    }
    let dstData: CFData = (cgImage.dataProvider!.data)!
    let pixelData = CFDataGetBytePtr(dstData)
    let region = MTLRegionMake2D(0, 0, width, height)
    print("[MetalViewPainting]: w= \(width) | h= \(height) region = \(region.size)")
    stampTexture.replace(region: region, mipmapLevel: 0, withBytes: pixelData!, bytesPerRow: Int(rowBytes))
    return stampTexture
} // end of func CGImageToMTLTexture(cgImage: CGImage)
The type of distortion looks like a bytes-per-row alignment issue between CGImage and MTLTexture. You're probably only seeing this issue when your image is a certain size that falls outside of the bytes-per-row alignment requirement of your MTLDevice. If you really need to store the texture as a CGImage, ensure that you are using the bytesPerRow value of the CGImage when copying back to the texture.
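In CGImageToMTLTexture above, that would mean letting the CGImage dictate the row stride rather than computing it from the width. A minimal sketch of the two lines that change, everything else staying the same:
let rowBytes = cgImage.bytesPerRow // rows may be padded, so don't assume width * bytesPerPixel
stampTexture.replace(region: region, mipmapLevel: 0, withBytes: pixelData!, bytesPerRow: rowBytes)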
After applying a CIFilter to a photo captured with the camera, the resulting image shrinks and repositions itself.
I was thinking that if I was able to get the original image's size and orientation, it would scale accordingly and pin the image view to the corners of the screen. However, nothing changes with this approach, and I'm not aware of a way to properly get the image to scale to the full size of the screen.
func applyBloom() -> UIImage {
    let ciImage = CIImage(image: image) // image is from UIImageView
    let filteredImage = ciImage?.applyingFilter("CIBloom",
                                                withInputParameters: [kCIInputRadiusKey: 8,
                                                                      kCIInputIntensityKey: 1.00])
    let originalScale = image.scale
    let originalOrientation = image.imageOrientation
    if let image = filteredImage {
        let image = UIImage(ciImage: image, scale: originalScale, orientation: originalOrientation)
        return image
    }
    return self.image
}
Picture description: the captured photo, and a screenshot of the displayed image, where the empty spacing is the result of the image shrinking.
Try something like this, replacing your function with:
func applyBloom() -> UIImage {
    let ciInputImage = CIImage(image: image)! // image is from UIImageView
    let ciOutputImage = ciInputImage.applyingFilter("CIBloom",
                                                    withInputParameters: [kCIInputRadiusKey: 8, kCIInputIntensityKey: 1.00])
    let context = CIContext()
    let cgOutputImage = context.createCGImage(ciOutputImage, from: ciInputImage.extent)
    return UIImage(cgImage: cgOutputImage!)
}
I renamed various variables to help explain what's happening.
Obviously, depending on your code, some tweaking to optionals and unwrapping may be needed.
What's happening is this - take the filtered/output CIImage, and using a CIContext, write a CGImage the size of the input CIImage.
Be aware that a CIContext is expensive. If you already have one created, you should probably use it.
Pretty much, a UIImage size is the same as a CIImage extent. (I say pretty much because some generated CIImages can have infinite extents.)
Depending on your specific needs (and your UIImageView), you may want to use the output CIImage extent instead. Usually though, they are the same.
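If you also need to preserve the original image's scale and orientation (as the question's code does), UIImage has a matching initializer you can hand the CGImage to; a small sketch of the return line:
return UIImage(cgImage: cgOutputImage!, scale: image.scale, orientation: image.imageOrientation)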
Last, a suggestion. If you are trying to use a CIFilter to show "near real-time" changes to an image (like a photo editor), consider the major performance improvements you'll get by using CIImages and a GLKView over UIImages and a UIImageView. The former uses the device's GPU instead of the CPU.
This could also happen if a CIFilter outputs an image with dimensions different from the input image's (e.g. with CIPixellate).
In which case, simply tell the CIContext to render the image in a smaller rectangle:
let cgOutputImage = context.createCGImage(ciOutputImage, from: ciInputImage.extent.insetBy(dx: 20, dy: 20))
There are several questions on SO asking how to render a UIView into a PDF context, but they all use view.layer.renderInContext(pdfContext), which results in a 72 DPI image (and one that looks terrible when printed). What I'm looking for is a technique to somehow get the UIView to render at something like 300 DPI.
In the end, I was able to take hints from several prior posts and put together a solution. I'm posting this since it took me a long time to get working, and I really hope to save someone else time and effort doing the same.
This solution uses two basic techniques:
Render the UIView into a scaled bitmap context to produce a large image
Draw the image into a PDF Context which has been scaled down, so that the drawn image has a high resolution
Build your view:
let v = UIView()
... // then add subviews, constraints, etc
Create the PDF Context:
UIGraphicsBeginPDFContextToData(data, docRect, stats.headerDict) // zero == (612 by 792 points)
defer { UIGraphicsEndPDFContext() }
UIGraphicsBeginPDFPage();
guard let pdfContext = UIGraphicsGetCurrentContext() else { return nil }
// I tried 300.0/72.0 but was not happy with the results
let rescale: CGFloat = 4 // 288 DPI rendering of the view
// You need to change the scale factor on all subviews, not just the top view!
// This is a vital step, and there may be other types of views that need to be excluded
Then create a large bitmap of the image with an expanded scale:
func scaler(v: UIView) {
    if !v.isKindOfClass(UIStackView.self) {
        v.contentScaleFactor = 8
    }
    for sv in v.subviews {
        scaler(sv)
    }
}
scaler(v)

// Create a large image by rendering the scaled view
let bigSize = CGSize(width: v.frame.size.width*rescale, height: v.frame.size.height*rescale)
UIGraphicsBeginImageContextWithOptions(bigSize, true, 1)
let context = UIGraphicsGetCurrentContext()!
CGContextSetFillColorWithColor(context, UIColor.whiteColor().CGColor)
CGContextFillRect(context, CGRect(origin: CGPoint(x: 0, y: 0), size: bigSize))

// Must increase the transform scale
CGContextScaleCTM(context, rescale, rescale)
v.layer.renderInContext(context)

let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Now we have a large image with each point representing one pixel.
To get it drawn into the PDF at high resolution, we need to scale the PDF down while drawing the image at its large size:
CGContextSaveGState(pdfContext)
CGContextTranslateCTM(pdfContext, v.frame.origin.x, v.frame.origin.y) // where the view should be shown
CGContextScaleCTM(pdfContext, 1/rescale, 1/rescale)
let frame = CGRect(origin: CGPoint(x: 0, y: 0), size: bigSize)
image.drawInRect(frame)
CGContextRestoreGState(pdfContext)
... // Continue with adding other items
You can see that the left "S", contained in the cream-colored bitmap, looks pretty nice compared to an "S" drawn with an attributed string:
When the same PDF is rendered simply, without all the scaling, this is what you would see:
Okay, sorry if the title is a little confusing. Basically, I am trying to get the image and subviews of the image view and combine them into a single exportable UIImage.
Here is my current code; however, it suffers a large loss of resolution.
func generateImage() -> UIImage {
    UIGraphicsBeginImageContext(environmentImageView.frame.size)
    var context : CGContextRef = UIGraphicsGetCurrentContext()
    environmentImageView.layer.renderInContext(context)
    var img : UIImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return img
}
You have to set the scale of the context to be retina.
UIGraphicsBeginImageContextWithOptions(environmentImageView.frame.size, false, 0)
0 means to use the scale of the screen which will work for non-retina devices as well.
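Putting it together, a minimal sketch of the corrected function in the same Swift 2-era syntax as the question, assuming the same environmentImageView:
func generateImage() -> UIImage {
    // Passing 0 as the scale makes the context match the device's screen scale
    UIGraphicsBeginImageContextWithOptions(environmentImageView.frame.size, false, 0)
    let context = UIGraphicsGetCurrentContext()!
    environmentImageView.layer.renderInContext(context)
    let img = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return img
}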
I have tested the vignette filter in Core Image and, while it's good, I am wondering whether anyone has implemented a color vignette effect (instead of black edges, it softens the edges) by chaining together various Core Image filters on iOS? Or can point me to a tutorial on how to do this?
Based on the answer below, this is my code, but it does not seem to have much effect.
func colorVignette(image: UIImage) -> UIImage {
    let cimage = CIImage(image: image)
    let whiteImage = CIImage(image: colorImage(UIColor.whiteColor(), size: image.size))
    var output1 = CIFilter(name: "CIGaussianBlur", withInputParameters: [kCIInputImageKey: cimage, kCIInputRadiusKey: 5]).outputImage
    var output2 = CIFilter(name: "CIVignette", withInputParameters: [kCIInputImageKey: whiteImage, kCIInputIntensityKey: vignette, kCIInputRadiusKey: 1]).outputImage
    var output = CIFilter(name: "CIBlendWithMask", withInputParameters: [kCIInputImageKey: cimage, kCIInputMaskImageKey: output2, kCIInputBackgroundImageKey: output1]).outputImage
    return UIImage(CGImage: ctx.createCGImage(output, fromRect: cimage.extent()))
}

func colorImage(color: UIColor, size: CGSize) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(size, false, 0)
    color.setFill()
    UIRectFill(CGRect(x: 0, y: 0, width: size.width, height: size.height))
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}
You could create this by chaining together a Gaussian blur, a vignette, and a blend-with-mask, along with the original image. First, blur the input image with CIGaussianBlur. Next, apply the CIVignette filter to a solid white image of the same size to produce a mask. Finally, blend the original image with the blurred image via the CIBlendWithMask filter, using the vignetted white image as the mask.
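Here is a minimal sketch of that chain in more recent Swift, using CIImage.applyingFilter(_:parameters:) (iOS 11 and later); the function name and the intensity parameter are illustrative:
func colorVignette(image: UIImage, intensity: CGFloat = 1.0) -> UIImage? {
    guard let input = CIImage(image: image) else { return nil }

    // 1. Blur the whole image.
    let blurred = input.applyingFilter("CIGaussianBlur", parameters: [kCIInputRadiusKey: 5])

    // 2. Vignette a solid white image of the same extent to build the mask.
    let white = CIImage(color: CIColor(color: .white)).cropped(to: input.extent)
    let mask = white.applyingFilter("CIVignette", parameters: [kCIInputIntensityKey: intensity,
                                                               kCIInputRadiusKey: 1])

    // 3. Where the mask is white, keep the sharp original; where it darkens, the blurred copy shows through.
    let output = input.applyingFilter("CIBlendWithMask", parameters: [kCIInputBackgroundImageKey: blurred,
                                                                      kCIInputMaskImageKey: mask])

    let context = CIContext()
    guard let cgImage = context.createCGImage(output, from: input.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}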