I am creating a UIImage from the current drawable's texture as follows.
func createImageFromCurrentDrawable() -> UIImage {
    let context = CIContext()
    let texture = metalView.currentDrawable!.texture
    let kciOptions = [kCIContextWorkingColorSpace: CGColorSpace(name: CGColorSpace.sRGB)!,
                      kCIContextOutputPremultiplied: true,
                      kCIContextUseSoftwareRenderer: false] as [String: Any]
    let cImg = CIImage(mtlTexture: texture, options: kciOptions)!
    let cgImg = context.createCGImage(cImg, from: cImg.extent)!
    let uiImg = UIImage(cgImage: cgImg)
    return uiImg
}
But it adds an alpha value to the UIImage that is not present in the texture. Is there any way to get rid of the alpha?
Here is the captured texture image.
And here is the UIImage created from the texture.
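For what it's worth, one direction to try (a sketch of an assumption, not something from this thread) is to force the alpha channel to 1.0 on the CIImage before rendering it to a CGImage, using settingAlphaOne(in:) (available since iOS 10), with cImg and context as in the function above:
// Sketch only: force the alpha channel to 1 before creating the CGImage.
let opaqueCIImage = cImg.settingAlphaOne(in: cImg.extent)
let cgImg = context.createCGImage(opaqueCIImage, from: opaqueCIImage.extent)!
let uiImg = UIImage(cgImage: cgImg)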
I'm trying to apply filters on images.
Applying the filter works great, but it mirrors the image vertically.
The images in the bottom row have the filter applied right after init.
The main image at the top gets the filter applied after tapping one of the images at the bottom.
The ciFilter is CIFilter.sepiaTone().
func applyFilter(image: UIImage) -> UIImage? {
    let rect = CGRect(origin: CGPoint.zero, size: image.size)
    let renderer = UIGraphicsImageRenderer(bounds: rect)
    ciFilter.setValue(CIImage(image: image), forKey: kCIInputImageKey)
    let image = renderer.image { context in
        let ciContext = CIContext(cgContext: context.cgContext, options: nil)
        if let outputImage = ciFilter.outputImage {
            ciContext.draw(outputImage, in: rect, from: rect)
        }
    }
    return image
}
And after applying the filter twice, the new image gets zoomed in.
Here are some screenshots.
You don't need to use UIGraphicsImageRenderer; you can get the image directly from a CIContext. (Drawing the CIImage into the renderer's CGContext is what mirrors it: Core Image uses a bottom-left origin, while the UIKit-backed context uses a top-left origin.)
func applyFilter(image: UIImage) -> UIImage? {
    ciFilter.setValue(CIImage(image: image), forKey: kCIInputImageKey)
    guard let ciImage = ciFilter.outputImage else {
        return nil
    }
    guard let outputCGImage = CIContext().createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    let filteredImage = UIImage(cgImage: outputCGImage,
                                scale: image.scale,
                                orientation: image.imageOrientation)
    return filteredImage
}
Is there a way to replicate func masking(_ mask: CGImage) -> CGImage? from Core Graphics using Core Image and one of its CIFilters? I've tried CIBlendWithMask and CIBlendWithAlphaMask without success. The most important thing is that I need to preserve the alpha channel, so I want the image to be masked in such a way that where the mask is black the image shows, and where the mask is white the result is transparent.
My masking code:
extension UIImage {
    func masked(by mask: UIImage) -> UIImage {
        guard let maskRef = mask.cgImage,
              let selfRef = cgImage,
              let dataProvider = maskRef.dataProvider,
              let mask = CGImage(maskWidth: maskRef.width,
                                 height: maskRef.height,
                                 bitsPerComponent: maskRef.bitsPerComponent,
                                 bitsPerPixel: maskRef.bitsPerPixel,
                                 bytesPerRow: maskRef.bytesPerRow,
                                 provider: dataProvider,
                                 decode: nil,
                                 shouldInterpolate: false),
              let masked = selfRef.masking(mask) else {
            fatalError("couldn't create mask!")
        }
        let maskedImage = UIImage(cgImage: masked)
        return maskedImage
    }
}
I found a solution. The key is to use a mask image in grayscale format; RGB and other formats will not work. The resizing step can be removed if the image and the mask have the same size. It is written below as an extension on CGImage, but of course it can be adapted as an extension on UIImage, CIImage, or CIFilter, or as a subclass, however you like. Enjoy :)
extension CGImage {
    func masked(by cgMask: CGImage) -> CIImage {
        let selfCI = CIImage(cgImage: self)
        let maskCI = CIImage(cgImage: cgMask)

        // Convert the grayscale mask into an alpha mask (white -> opaque, black -> transparent).
        let maskFilter = CIFilter(name: "CIMaskToAlpha")
        maskFilter?.setValue(maskCI, forKey: "inputImage")

        // Scale the mask to match the image (can be skipped if the sizes already match).
        let scaleFilter = CIFilter(name: "CILanczosScaleTransform")
        scaleFilter?.setValue(maskFilter?.outputImage, forKey: "inputImage")
        scaleFilter?.setValue(selfCI.extent.height / maskCI.extent.height, forKey: "inputScale")

        // Blend: with no foreground image set, the background shows only where the mask alpha is 0.
        let filter: CIFilter! = CIFilter(name: "CIBlendWithAlphaMask")
        filter.setValue(selfCI, forKey: "inputBackgroundImage")
        let maskOutput = scaleFilter?.outputImage
        filter.setValue(maskOutput, forKey: "inputMaskImage")

        let outputImage = filter.outputImage!
        return outputImage
    }
}
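To get something you can display from the returned CIImage, render it through a CIContext; here is a brief usage sketch (the variable names are illustrative, not from the answer):
// Hypothetical inputs: photoCGImage is the source image, maskCGImage the grayscale mask.
let maskedCIImage = photoCGImage.masked(by: maskCGImage)
let context = CIContext()
if let maskedCGImage = context.createCGImage(maskedCIImage, from: maskedCIImage.extent) {
    let maskedUIImage = UIImage(cgImage: maskedCGImage)
    // e.g. assign maskedUIImage to a UIImageView
}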
You need to create your own custom CIFilter. The tutorials:
Apple Docs
Apple Docs 2
Raywenderlich.com Tutorial
This is not trivial, but it pays off when you learn how to do it :)
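For a rough idea of what such a custom filter can look like for the masking case above, here is a minimal sketch using the string-based CIColorKernel API (soft-deprecated, but still available); the class and kernel names are my own, not from the tutorials, and premultiplied-alpha details are kept simple:
import CoreImage

// Minimal sketch: keep the image where the mask is black, make it transparent where
// the mask is white. The names are illustrative only.
class InvertedMaskFilter: CIFilter {
    @objc dynamic var inputImage: CIImage?
    @objc dynamic var inputMaskImage: CIImage?

    private static let kernel = CIColorKernel(source:
        """
        kernel vec4 invertedMask(__sample image, __sample mask) {
            // mask.r == 0 (black) -> keep the pixel; mask.r == 1 (white) -> transparent
            return image * (1.0 - mask.r);
        }
        """
    )

    override var outputImage: CIImage? {
        guard let image = inputImage,
              let mask = inputMaskImage,
              let kernel = InvertedMaskFilter.kernel else { return nil }
        return kernel.apply(extent: image.extent, arguments: [image, mask])
    }
}
Usage is the same as for any other CIFilter: set inputImage and inputMaskImage, then read outputImage.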
I'm building a camera app that captures a photo in the BGRA format, and applies a Core Image filter on it before saving it to the Photos app. On the iPhone 7 Plus, the input photo is in the Display P3 color space, but the output is in the sRGB color space:
How do I prevent this from happening?
Here's my code:
let sampleBuffer: CMSampleBuffer = ...
let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
let metadata = CMCopyDictionaryOfAttachments(nil, sampleBuffer, kCMAttachmentMode_ShouldPropagate)!
let ciImage = CIImage(cvImageBuffer: pixelBuffer,
                      options: [kCIImageProperties: metadata])
NSLog("\(ciImage.colorSpace)")
let context = CIContext()
let data = context.jpegRepresentation(of: ciImage,
                                      colorSpace: ciImage.colorSpace!,
                                      options: [:])!
// Save this using PHPhotoLibrary.
This prints:
Optional(<CGColorSpace 0x1c40a8a60> (kCGColorSpaceICCBased; kCGColorSpaceModelRGB; Display P3))
(In my actual code, I apply a filter to the CIImage, which creates another CIImage, which I save. But I can reproduce this problem even with the original CIImage, so I've eliminated the filter.)
How do I apply a Core Image filter to a P3 image and save it as a P3 image, not sRGB?
Notes:
(1) This is on iPhone 7 Plus running iOS 11.
(2) I'm using the wide camera, not tele, dual or front.
(3) If I ask AVFoundation to give me a JPEG-encoded image rather than BGRA, and save it without involving Core Image, this problem doesn't occur — the color space isn't reduced to sRGB.
(4) I tried using kCIImageColorSpace, but it made no difference:
let p3 = CGColorSpace(name: CGColorSpace.displayP3)!
let ciImage = CIImage(cvImageBuffer: pixelBuffer,
                      options: [kCIImageProperties: metadata,
                                kCIImageColorSpace: p3])
(5) I tried using kCIContextOutputColorSpace in addition to the above, as an option when creating the CIContext, but it again made no difference:
let context = CIContext(options: [kCIContextOutputColorSpace: CGColorSpace(name: CGColorSpace.displayP3)!])
(6) The code that takes a Data and saves it to PHPhotoLibrary is not the problem, since it works in case (3) above.
I've had the same issue and I think this may be a bug with context.jpegRepresentation(..).
I've had more success using ImageIO to create the JPEG data, as shown in the createJPEGData function below. For example:
let eaglContext = EAGLContext(api: .openGLES2)!
let options = [kCIContextWorkingColorSpace: CGColorSpace(name: CGColorSpace.extendedSRGB)!,
               kCIContextOutputPremultiplied: true,
               kCIContextUseSoftwareRenderer: false] as [String: Any]
let ciContext = CIContext(eaglContext: eaglContext, options: options)

let colorSpace = CGColorSpace(name: CGColorSpace.displayP3)!
guard let imageData = createJPEGData(from: image,
                                     jpegQuality: 0.9,
                                     outputColorSpace: colorSpace,
                                     context: ciContext) else {
    return
}

PHPhotoLibrary.shared().performChanges({ () -> Void in
    let creationRequest = PHAssetCreationRequest.forAsset()
    creationRequest.addResource(with: .photo,
                                data: imageData,
                                options: nil)
}, completionHandler: { (success: Bool, error: Error?) -> Void in
    // handle errors, etc.
})
func createJPEGData(from image: CIImage,
                    jpegQuality: Float,
                    outputColorSpace: CGColorSpace,
                    context: CIContext) -> Data? {
    let jpegData: CFMutableData = CFDataCreateMutable(nil, 0)
    if let destination = CGImageDestinationCreateWithData(jpegData, kUTTypeJPEG, 1, nil) {
        if let cgImage = context.createCGImage(image,
                                               from: image.extent,
                                               format: kCIFormatRGBA8,
                                               colorSpace: outputColorSpace) {
            // Carry the image metadata over and apply the requested JPEG quality.
            var properties = image.properties
            properties[kCGImageDestinationLossyCompressionQuality as String] = jpegQuality
            CGImageDestinationAddImage(destination, cgImage, properties as CFDictionary)
            if CGImageDestinationFinalize(destination) {
                return jpegData as Data
            }
        }
    }
    return nil
}
I want to blur the whole screen of my iOS app, and I can't use UIBlurEffect because I want to be able to control the blurriness. So I'm trying to use CIGaussianBlur, but I'm having trouble with the edges of the screen.
I'm taking a screenshot of the screen, and then running it through a CIFilter with CIGaussianBlur, converting the CIImage back to UIImage, and adding the new blurred image on top of the screen.
Here's my code:
let layer = UIApplication.sharedApplication().keyWindow?.layer
UIGraphicsBeginImageContext(view.frame.size)
layer!.renderInContext(UIGraphicsGetCurrentContext()!)
let screenshot = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
let blurRadius = 5
var ciimage: CIImage = CIImage(image: screenshot)!
var filter: CIFilter = CIFilter(name:"CIGaussianBlur")!
filter.setDefaults()
filter.setValue(ciimage, forKey: kCIInputImageKey)
filter.setValue(blurRadius, forKey: kCIInputRadiusKey)
let ciContext = CIContext(options: nil)
let result = filter.valueForKey(kCIOutputImageKey) as! CIImage!
let cgImage = ciContext.createCGImage(result, fromRect: view.frame)
let finalImage = UIImage(CGImage: cgImage)
let blurImageView = UIImageView(frame: view.frame)
blurImageView.image = finalImage
blurImageView.sizeToFit()
blurImageView.contentMode = .ScaleAspectFit
blurImageView.center = view.center
view.addSubview(blurImageView)
Here is what I see:
It looks almost right, except at the edges. It seems that the blurriness falls off within about one blur radius of the edge. I tried playing with the context size but couldn't seem to make it work.
How can I make the blur go all the way to the edges?
This is happening because the Gaussian blur filter samples pixels outside the edges of the image, and since there are no pixels there, you get this edge artifact. You can use the "CIAffineClamp" filter to "extend" your image infinitely in all directions.
Please see this answer: https://stackoverflow.com/a/18138742/762779
I tried running your code with chained CIAffineClamp -> CIGaussianBlur filters and got good results.
let layer = UIApplication.sharedApplication().keyWindow?.layer
UIGraphicsBeginImageContext(view.frame.size)
layer!.renderInContext(UIGraphicsGetCurrentContext()!)
let screenshot = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
let blurRadius = 5
let ciimage: CIImage = CIImage(image: screenshot)!
// Added "CIAffineClamp" filter
let affineClampFilter = CIFilter(name: "CIAffineClamp")!
affineClampFilter.setDefaults()
affineClampFilter.setValue(ciimage, forKey: kCIInputImageKey)
let resultClamp = affineClampFilter.valueForKey(kCIOutputImageKey)
// resultClamp is used as input for "CIGaussianBlur" filter
let filter: CIFilter = CIFilter(name:"CIGaussianBlur")!
filter.setDefaults()
filter.setValue(resultClamp, forKey: kCIInputImageKey)
filter.setValue(blurRadius, forKey: kCIInputRadiusKey)
let ciContext = CIContext(options: nil)
let result = filter.valueForKey(kCIOutputImageKey) as! CIImage!
let cgImage = ciContext.createCGImage(result, fromRect: ciimage.extent) // changed from view.frame to ciimage.extent
let finalImage = UIImage(CGImage: cgImage)
let blurImageView = UIImageView(frame: view.frame)
blurImageView.image = finalImage
blurImageView.sizeToFit()
blurImageView.contentMode = .ScaleAspectFit
blurImageView.center = view.center
view.addSubview(blurImageView)
I'm attempting to set the image property of a UIImageView to an image I'm blurring with Core Image. The code works perfectly with an unfiltered image, but when I set the background image to the filtered image, contentMode appears to stop working for the UIImageView -- instead of aspect-filling, the image becomes vertically stretched. In addition to setting contentMode in code, I also set it on the storyboard, but the result was the same.
I'm using Swift 2 / Xcode 7.
func updateBackgroundImage(image: UIImage) {
    backgroundImage.contentMode = .ScaleAspectFill
    backgroundImage.layer.masksToBounds = true
    backgroundImage.image = blurImage(image)
}
func blurImage(image: UIImage) -> UIImage {
    let imageToBlur = CIImage(image: image)!
    let blurfilter = CIFilter(name: "CIGaussianBlur")!
    blurfilter.setValue(10, forKey: kCIInputRadiusKey)
    blurfilter.setValue(imageToBlur, forKey: "inputImage")
    let resultImage = blurfilter.valueForKey("outputImage") as! CIImage
    let croppedImage: CIImage = resultImage.imageByCroppingToRect(CGRectMake(0, 0, imageToBlur.extent.size.width, imageToBlur.extent.size.height))
    let blurredImage = UIImage(CIImage: croppedImage)
    return blurredImage
}
Why is filtering with CIImage causing my image to ignore contentMode and how do I fix the issue?
A UIImage created directly from a CIImage has no bitmap (CGImage) backing, and UIImageView does not render such images reliably, which is why contentMode appears to be ignored. The solution is to render through a CIContext first, i.e. replace your line:
let blurredImage = UIImage(CIImage: croppedImage)
with these 2 lines:
let context = CIContext(options: nil)
let blurredImage = UIImage(CGImage: context.createCGImage(croppedImage, fromRect: croppedImage.extent))
So your full blurImage function would look like this:
func blurImage(image: UIImage) -> UIImage {
    let imageToBlur = CIImage(image: image)!
    let blurfilter = CIFilter(name: "CIGaussianBlur")!
    blurfilter.setValue(10, forKey: kCIInputRadiusKey)
    blurfilter.setValue(imageToBlur, forKey: "inputImage")
    let resultImage = blurfilter.valueForKey("outputImage") as! CIImage
    let croppedImage: CIImage = resultImage.imageByCroppingToRect(CGRectMake(0, 0, imageToBlur.extent.size.width, imageToBlur.extent.size.height))
    let context = CIContext(options: nil)
    let blurredImage = UIImage(CGImage: context.createCGImage(croppedImage, fromRect: croppedImage.extent))
    return blurredImage
}