I am using the following function to blur an image with a depth map, and I noticed that the filter creates a border around the resulting image. I'm not sure what I did wrong to cause this.
func blur(image: CIImage, mask: CIImage, orientation: UIImageOrientation = .up) -> UIImage? {
    let invertedMask = mask.applyingFilter("CIColorInvert")
    let output = image.applyingFilter("CIMaskedVariableBlur", parameters: ["inputMask": invertedMask, "inputRadius": 15.0])
    guard let cgImage = context.createCGImage(output, from: output.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage, scale: 1.0, orientation: orientation)
}
I think you likely want:
func blur(image: CIImage, mask: CIImage, orientation: UIImageOrientation = .up) -> UIImage? {
    let invertedMask = mask.applyingFilter("CIColorInvert")
    let output = image.applyingFilter("CIMaskedVariableBlur", parameters: ["inputMask": invertedMask, "inputRadius": 15.0])
    guard let cgImage = context.createCGImage(output, from: image.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage, scale: 1.0, orientation: orientation)
}
Here you are drawing in the extent of the original image. CIMaskedVariableBlur expands the extent to include all of the pixels it samples, which will likely include pixels you are not concerned with, particularly along the edges, where color values are averaged with values outside the bounds of the original image.
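Equivalently, if you prefer to keep the createCGImage call unchanged, you can crop the filter output back to the source extent before rendering. A minimal sketch (not from the original answer, but using CIImage's cropped(to:)):

// Alternative sketch: crop the blurred output back to the source image's extent,
// discarding the edge pixels CIMaskedVariableBlur produced outside the original bounds.
let cropped = output.cropped(to: image.extent)
guard let cgImage = context.createCGImage(cropped, from: cropped.extent) else {
    return nil
}
return UIImage(cgImage: cgImage, scale: 1.0, orientation: orientation)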
Related
I'm trying to apply filters on images.
Applying the filter works great, but it mirrors the image vertically.
The bottom row of images calls the filter function after init.
The main image at the top gets the filter applied after pressing one at the bottom.
The ciFilter is CIFilter.sepiaTone().
func applyFilter(image: UIImage) -> UIImage? {
    let rect = CGRect(origin: CGPoint.zero, size: image.size)
    let renderer = UIGraphicsImageRenderer(bounds: rect)
    ciFilter.setValue(CIImage(image: image), forKey: kCIInputImageKey)
    let image = renderer.image { context in
        let ciContext = CIContext(cgContext: context.cgContext, options: nil)
        if let outputImage = ciFilter.outputImage {
            ciContext.draw(outputImage, in: rect, from: rect)
        }
    }
    return image
}
And after applying the filter twice, the new image gets zoomed in.
Here are some screenshots.
You don't need to use UIGraphicsImageRenderer.
You can directly get the image from CIContext.
func applyFilter(image: UIImage) -> UIImage? {
    ciFilter.setValue(CIImage(image: image), forKey: kCIInputImageKey)
    guard let ciImage = ciFilter.outputImage else {
        return nil
    }
    guard let outputCGImage = CIContext().createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return UIImage(cgImage: outputCGImage, scale: image.scale, orientation: image.imageOrientation)
}
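The mirroring happens because Core Image draws with a bottom-left origin while UIKit's graphics contexts use a top-left origin. If you do want to keep UIGraphicsImageRenderer, a rough sketch of the alternative (assuming the same ciFilter property) is to flip the CGContext before handing it to Core Image:

func applyFilter(image: UIImage) -> UIImage? {
    let rect = CGRect(origin: .zero, size: image.size)
    let renderer = UIGraphicsImageRenderer(bounds: rect)
    ciFilter.setValue(CIImage(image: image), forKey: kCIInputImageKey)
    return renderer.image { context in
        // Flip the UIKit context so Core Image's bottom-left origin
        // lines up with UIKit's top-left origin.
        context.cgContext.translateBy(x: 0, y: rect.height)
        context.cgContext.scaleBy(x: 1, y: -1)
        let ciContext = CIContext(cgContext: context.cgContext, options: nil)
        if let outputImage = ciFilter.outputImage {
            // Draw the full output extent into the target rect.
            ciContext.draw(outputImage, in: rect, from: outputImage.extent)
        }
    }
}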
I want to convert an image to grayscale using CIPhotoEffectNoir, but after applying it, the image rotates. I searched a lot, but the answers could not solve my problem.
This is my code:
func grayscale(image: UIImage) -> UIImage? {
    let context = CIContext(options: nil)
    if let filter = CIFilter(name: "CIPhotoEffectNoir") {
        filter.setValue(CIImage(image: image), forKey: kCIInputImageKey)
        if let output = filter.outputImage {
            if let cgImage = context.createCGImage(output, from: output.extent) {
                return UIImage(cgImage: cgImage)
            }
        }
    }
    return nil
}
before:
after:
what I want:
When you create your new UIImage instance, don't forget to use the scale and orientation values from the original UIImage.
return UIImage(
    cgImage: cgImage,
    scale: image.scale,
    orientation: image.imageOrientation)
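Applied to the function above, a minimal sketch of the fix looks like this:

func grayscale(image: UIImage) -> UIImage? {
    let context = CIContext(options: nil)
    guard let filter = CIFilter(name: "CIPhotoEffectNoir") else { return nil }
    filter.setValue(CIImage(image: image), forKey: kCIInputImageKey)
    guard let output = filter.outputImage,
          let cgImage = context.createCGImage(output, from: output.extent) else {
        return nil
    }
    // Preserve the original scale and orientation so the result is not rotated.
    return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
}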
I am creating a UIImage from the current drawable texture as follows.
func createImageFromCurrentDrawable() -> UIImage {
    let context = CIContext()
    let texture = metalView.currentDrawable!.texture
    let kciOptions = [kCIContextWorkingColorSpace: CGColorSpace(name: CGColorSpace.sRGB)!,
                      kCIContextOutputPremultiplied: true,
                      kCIContextUseSoftwareRenderer: false] as [String: Any]
    let cImg = CIImage(mtlTexture: texture, options: kciOptions)!
    let cgImg = context.createCGImage(cImg, from: cImg.extent)!
    let uiImg = UIImage(cgImage: cgImg)
    return uiImg
}
But it adds an alpha value to the UIImage that does not appear in the texture. Is there any solution to get rid of the alpha?
Here is the captured texture image.
Here is the UIImage created from the texture.
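One approach that is sometimes used (a sketch only, not from this thread) is to force every pixel to be fully opaque before rendering, using CIImage's settingAlphaOne(in:). This assumes the unwanted transparency comes from the texture's alpha channel:

// Hypothetical sketch: set alpha to 1 across the whole extent before creating the CGImage.
let opaque = cImg.settingAlphaOne(in: cImg.extent)
let cgImg = context.createCGImage(opaque, from: opaque.extent)!
let uiImg = UIImage(cgImage: cgImg)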
I have a UIImage coming from the server that I need to present in the UI as a monochromatic image in a given single color, which can be arbitrary as well. What's the best way to achieve it?
Currently I am using the following method, which returns a monochromatic image for a given image and color:
fileprivate func monochromaticImage(from image: UIImage, in color: UIColor) -> UIImage {
    guard let img = CIImage(image: image) else {
        return image
    }
    let color = CIColor(color: color)
    guard let outputImage = CIFilter(name: "CIColorMonochrome",
                                     withInputParameters: ["inputImage": img,
                                                           "inputColor": color])?.outputImage else {
        return image
    }
    let context = CIContext()
    if let cgImage = context.createCGImage(outputImage, from: outputImage.extent) {
        let newImage = UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
        return newImage
    }
    return image
}
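For reference, the method is called like this (serverImage and imageView are hypothetical names):

// Hypothetical usage: tint an image from the server with an arbitrary color.
let tinted = monochromaticImage(from: serverImage, in: .red)
imageView.image = tinted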
I am trying to use a color mask to make a color in a JPG image transparent because, from what I read, color masking only works with JPGs.
This code works when I apply the color mask and save the image as a JPG, but as a JPG there is no transparency. So I want to convert the JPG image to a PNG image to keep the transparency, but when I try to do that, the color mask doesn't work.
Am I doing something wrong, or is this maybe not the right approach?
Here is the code of the two functions:
func callChangeColorByTransparent(_ sender: UIButton) {
    var colorMasking: [CGFloat] = []
    if let textLabel = sender.titleLabel?.text {
        switch textLabel {
        case "Remove Black":
            colorMasking = [0, 30, 0, 30, 0, 30]
        case "Remove Red":
            colorMasking = [180, 255, 0, 50, 0, 60]
        default:
            colorMasking = [222, 255, 222, 255, 222, 255]
        }
    }
    print(colorMasking)
    let newImage = changeColorByTransparent(selectedImage, colorMasking: colorMasking)
    symbolImageView.image = newImage
}
func changeColorByTransparent(_ image: UIImage, colorMasking: [CGFloat]) -> UIImage {
    let rawImage: CGImage = image.cgImage!
    //let colorMasking: [CGFloat] = [222, 255, 222, 255, 222, 255]
    UIGraphicsBeginImageContext(image.size)
    let maskedImageRef: CGImage = rawImage.copy(maskingColorComponents: colorMasking)!
    if let context = UIGraphicsGetCurrentContext() {
        context.draw(maskedImageRef, in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
        let newImage = UIImage(cgImage: maskedImageRef, scale: image.scale, orientation: image.imageOrientation)
        UIGraphicsEndImageContext()
        let pngImage = UIImage(data: UIImagePNGRepresentation(newImage)!, scale: 1.0)
        return pngImage!
    }
    print("fail")
    return image
}
Thanks for your help.
Thanks to DonMag's answer to my other question, SWIFT 3 - CGImage copy always nil, here is the code that solves this:
func saveImageWithAlpha(theImage: UIImage, destFile: URL) -> Void {
    // odd but works... solution to image not saving with proper alpha channel
    UIGraphicsBeginImageContext(theImage.size)
    theImage.draw(at: CGPoint.zero)
    let saveImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    if let img = saveImage, let data = UIImagePNGRepresentation(img) {
        try? data.write(to: destFile)
    }
}
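Putting the two pieces together, a usage sketch (with a hypothetical destination URL) might look like this:

// Hypothetical usage: remove near-white pixels, then write the result as a PNG
// so the transparency survives.
let masked = changeColorByTransparent(selectedImage, colorMasking: [222, 255, 222, 255, 222, 255])
let destURL = FileManager.default.temporaryDirectory.appendingPathComponent("masked.png")
saveImageWithAlpha(theImage: masked, destFile: destURL)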