How to create monochromatic UIImage on iOS

I have a UIImage coming from the server that I need to present in the UI as a monochromatic image, tinted with a given single color that can be arbitrary as well. What's the best way to achieve this?

Currently I am using the following method, which returns a monochromatic image for a given image and color:
fileprivate func monochromaticImage(from image: UIImage, in color: UIColor) -> UIImage {
    guard let img = CIImage(image: image) else {
        return image
    }
    let color = CIColor(color: color)
    guard let outputImage = CIFilter(name: "CIColorMonochrome",
                                     withInputParameters: ["inputImage": img,
                                                           "inputColor": color])?.outputImage else {
        return image
    }
    let context = CIContext()
    if let cgImage = context.createCGImage(outputImage, from: outputImage.extent) {
        let newImage = UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
        return newImage
    }
    return image
}
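If you target iOS 13 or later, the same effect can be written against the type-safe CIFilterBuiltins API instead of string-keyed parameters; here is a sketch (the `color` and `intensity` properties are the standard CIColorMonochrome inputs):

```swift
import UIKit
import CoreImage.CIFilterBuiltins

// Sketch of the same conversion using the iOS 13+ type-safe filter API.
fileprivate func monochromaticImage(from image: UIImage, in color: UIColor) -> UIImage {
    guard let input = CIImage(image: image) else { return image }
    let filter = CIFilter.colorMonochrome()
    filter.inputImage = input
    filter.color = CIColor(color: color)   // the single tint color
    filter.intensity = 1.0                 // full strength of the monochrome effect
    let context = CIContext()
    guard let output = filter.outputImage,
          let cgImage = context.createCGImage(output, from: output.extent) else {
        return image
    }
    return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
}
```

The compiler checks the property names for you, which avoids typos in keys like "inputColor".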

Related

UIGraphicsImageRenderer mirrors image after applying filter

I'm trying to apply filters on images.
Applying the filter works great, but it mirrors the image vertically.
The bottom row of images calls the filter function after init. The main image at the top gets the filter applied after pressing one at the bottom.
The ciFilter is CIFilter.sepiaTone().
func applyFilter(image: UIImage) -> UIImage? {
    let rect = CGRect(origin: CGPoint.zero, size: image.size)
    let renderer = UIGraphicsImageRenderer(bounds: rect)
    ciFilter.setValue(CIImage(image: image), forKey: kCIInputImageKey)
    let image = renderer.image { context in
        let ciContext = CIContext(cgContext: context.cgContext, options: nil)
        if let outputImage = ciFilter.outputImage {
            ciContext.draw(outputImage, in: rect, from: rect)
        }
    }
    return image
}
And after applying the filter twice, the new image gets zoomed in.
Here are some screenshots.
You don't need to use UIGraphicsImageRenderer.
You can directly get the image from CIContext.
func applyFilter(image: UIImage) -> UIImage? {
    ciFilter.setValue(CIImage(image: image), forKey: kCIInputImageKey)
    guard let ciImage = ciFilter.outputImage else {
        return nil
    }
    guard let outputCGImage = CIContext().createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    let filteredImage = UIImage(cgImage: outputCGImage, scale: image.scale, orientation: image.imageOrientation)
    return filteredImage
}

Performance Improvements of CIFilter Chain?

In a photo app (no video), I have a number of built-in and custom Metal CIFilters chained together in a class like so (I've left out the lines to set filter parameters, other than the input image):
var colorControlsFilter = CIFilter(name: "CIColorControls")!
var highlightShadowFilter = CIFilter(name: "CIHighlightShadowAdjust")!

func filter(imageData: Data) -> UIImage?
{
    var outputImage: CIImage?
    let rawFilter = CIFilter(imageData: imageData, options: nil)
    outputImage = rawFilter?.outputImage
    colorControlsFilter.setValue(outputImage, forKey: kCIInputImageKey)
    outputImage = colorControlsFilter.outputImage
    highlightShadowFilter.setValue(outputImage, forKey: kCIInputImageKey)
    outputImage = highlightShadowFilter.outputImage
    ...
    if let ciImage = outputImage
    {
        return renderImage(ciImage: ciImage)
    }
    return nil
}
func renderImage(ciImage: CIImage) -> UIImage?
{
    var outputImage: UIImage?
    let size = ciImage.extent.size
    UIGraphicsBeginImageContext(size)
    if let context = UIGraphicsGetCurrentContext()
    {
        context.interpolationQuality = .high
        context.setShouldAntialias(true)
        let inputImage = UIImage(ciImage: ciImage)
        inputImage.draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
        outputImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
    }
    return outputImage
}
Processing takes about a second.
Is this way of linking together output to input of the filters the most efficient? Or more generally: What performance optimisations could I do?
You should use a CIContext to render the image:
var context = CIContext() // create this once and re-use it for each image

func render(image ciImage: CIImage) -> UIImage? {
    let cgImage = context.createCGImage(ciImage, from: ciImage.extent)
    return cgImage.map(UIImage.init)
}
It's important to create the CIContext only once: it's expensive to create because it holds and caches all the (Metal) resources needed for rendering the image.
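On the chaining question: the CIImage-based `applyingFilter(_:parameters:)` API keeps the whole chain lazy, so Core Image can concatenate the filter kernels and render once at the end. A sketch of the same chain in that style (the filter names and parameter keys are standard Core Image; the specific parameter values are placeholders):

```swift
import UIKit
import CoreImage

let sharedContext = CIContext() // create once, reuse for every image

func filter(imageData: Data) -> UIImage? {
    guard let input = CIImage(data: imageData) else { return nil }
    // Each call just builds up the recipe; nothing renders until createCGImage.
    let output = input
        .applyingFilter("CIColorControls",
                        parameters: [kCIInputSaturationKey: 1.1])
        .applyingFilter("CIHighlightShadowAdjust",
                        parameters: ["inputHighlightAmount": 0.8])
    guard let cgImage = sharedContext.createCGImage(output, from: output.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}
```

This also replaces the UIGraphicsBeginImageContext round trip with a single GPU render through the shared context, which is usually where the second of processing time goes.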

Swift: UIImage orientation problem when applying filter in swift

I want to convert an image to grayscale with CIPhotoEffectNoir, but after using it, the image rotates. I searched a lot, but the answers could not solve my problem.
This is my code:
func grayscale(image: UIImage) -> UIImage? {
    let context = CIContext(options: nil)
    if let filter = CIFilter(name: "CIPhotoEffectNoir") {
        filter.setValue(CIImage(image: image), forKey: kCIInputImageKey)
        if let output = filter.outputImage {
            if let cgImage = context.createCGImage(output, from: output.extent) {
                return UIImage(cgImage: cgImage)
            }
        }
    }
    return nil
}
before:
after:
what I want:
When you create your new UIImage instance, don't forget to use the scale and orientation values from the original UIImage.
return UIImage(
    cgImage: cgimg,
    scale: image.scale,
    orientation: image.imageOrientation)
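Folding that fix back into the original function, a sketch of the corrected version:

```swift
import UIKit
import CoreImage

func grayscale(image: UIImage) -> UIImage? {
    let context = CIContext(options: nil)
    guard let filter = CIFilter(name: "CIPhotoEffectNoir") else { return nil }
    filter.setValue(CIImage(image: image), forKey: kCIInputImageKey)
    guard let output = filter.outputImage,
          let cgImage = context.createCGImage(output, from: output.extent) else {
        return nil
    }
    // Preserve the original scale and orientation so the result is not rotated.
    return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
}
```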

CIMaskedVariableBlur creates unwanted border

I am using the following function to blur an image with a depth map, and noticed that the filter creates a border around the resulting image. I am not sure what I did wrong to cause this.
func blur(image: CIImage, mask: CIImage, orientation: UIImageOrientation = .up) -> UIImage? {
    let invertedMask = mask.applyingFilter("CIColorInvert")
    let output = image.applyingFilter("CIMaskedVariableBlur",
                                      parameters: ["inputMask": invertedMask, "inputRadius": 15.0])
    guard let cgImage = context.createCGImage(output, from: output.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage, scale: 1.0, orientation: orientation)
}
I think you likely want to say:
func blur(image: CIImage, mask: CIImage, orientation: UIImageOrientation = .up) -> UIImage? {
    let invertedMask = mask.applyingFilter("CIColorInvert")
    let output = image.applyingFilter("CIMaskedVariableBlur",
                                      parameters: ["inputMask": invertedMask, "inputRadius": 15.0])
    guard let cgImage = context.createCGImage(output, from: image.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage, scale: 1.0, orientation: orientation)
}
Here you are drawing only the extent of the original image. CIMaskedVariableBlur expands the output extent to include all of the pixels it sampled, which will likely include pixels you are not concerned with, particularly along the edges, where color values are averaged with values outside the bounds of the original image.
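A related way to avoid the darkened border is to clamp the input so its edge pixels extend infinitely, run the blur, and crop back to the original extent. A sketch (the `context` here is created locally for the example; the original code uses an existing instance):

```swift
import UIKit
import CoreImage

func blur(image: CIImage, mask: CIImage, orientation: UIImage.Orientation = .up) -> UIImage? {
    let context = CIContext()
    let invertedMask = mask.applyingFilter("CIColorInvert")
    let output = image
        .clampedToExtent()  // edge pixels repeat outward, so the blur has real data to sample
        .applyingFilter("CIMaskedVariableBlur",
                        parameters: ["inputMask": invertedMask, "inputRadius": 15.0])
        .cropped(to: image.extent)  // discard the expanded border
    guard let cgImage = context.createCGImage(output, from: output.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage, scale: 1.0, orientation: orientation)
}
```

Clamping means the blur averages edge pixels with copies of themselves rather than with transparent black, so the edges keep their original brightness.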

SWIFT 3 - CGImage to PNG

I am trying to use a color mask to make a color in a JPG image transparent because, as I read, color masking only works with JPGs.
This code works when I apply the color mask and save the image as a JPG, but a JPG has no transparency, so I want to convert the JPG image to a PNG to keep the transparency. When I try to do that, the color mask doesn't work.
Am I doing something wrong, or maybe this isn't the right approach?
Here is the code of the two functions:
func callChangeColorByTransparent(_ sender: UIButton) {
    var colorMasking: [CGFloat] = []
    if let textLabel = sender.titleLabel?.text {
        switch textLabel {
        case "Remove Black":
            colorMasking = [0, 30, 0, 30, 0, 30]
        case "Remove Red":
            colorMasking = [180, 255, 0, 50, 0, 60]
        default:
            colorMasking = [222, 255, 222, 255, 222, 255]
        }
    }
    print(colorMasking)
    let newImage = changeColorByTransparent(selectedImage, colorMasking: colorMasking)
    symbolImageView.image = newImage
}
func changeColorByTransparent(_ image: UIImage, colorMasking: [CGFloat]) -> UIImage {
    let rawImage: CGImage = image.cgImage!
    //let colorMasking: [CGFloat] = [222,255,222,255,222,255]
    UIGraphicsBeginImageContext(image.size)
    let maskedImageRef: CGImage = rawImage.copy(maskingColorComponents: colorMasking)!
    if let context = UIGraphicsGetCurrentContext() {
        context.draw(maskedImageRef, in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
        let newImage = UIImage(cgImage: maskedImageRef, scale: image.scale, orientation: image.imageOrientation)
        UIGraphicsEndImageContext()
        let pngImage = UIImage(data: UIImagePNGRepresentation(newImage)!, scale: 1.0)
        return pngImage!
    }
    print("fail")
    return image
}
Thanks for your help.
Thanks to DonMag's answer to my other question, SWIFT 3 - CGImage copy always nil, here is the code that solves this:
func saveImageWithAlpha(theImage: UIImage, destFile: URL) -> Void {
    // odd but works... solution to image not saving with proper alpha channel
    UIGraphicsBeginImageContext(theImage.size)
    theImage.draw(at: CGPoint.zero)
    let saveImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    if let img = saveImage, let data = UIImagePNGRepresentation(img) {
        try? data.write(to: destFile)
    }
}