Is it possible to apply image processing filters to a raw SVG image for editing apps in iOS?

Image and color property filters like contrast, exposure, vibrance, tint, etc. can be applied well to raster images using CIFilter or GPUImage filters in iOS.
Is there any way to apply those basic filters to an SVG image?
I can saturate and brighten the SVG using the Color class of the Macaw library this way:
if let shape = svgContent as? Shape {
    if let shapeOriginalColor = allNodeColors[shape.id] as? Color {
        let shiftedColor = shapeOriginalColor.colorToUIColor()
            .saturated(amount: saturation)
            .lighter(amount: brightness)
        shape.fill = Color.convert(from: shiftedColor)
    }
}
but it seems the other filters can't be achieved, as there is no prominent support for SVG in iOS.
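One pragmatic route, if per-shape color changes in Macaw aren't enough, is to rasterize the SVG to a UIImage first and then run the usual Core Image chain on the bitmap. A minimal sketch, assuming you already have a rasterized UIImage of the SVG (e.g. from rendering the Macaw node into a graphics context); the function name and default values are illustrative:

```swift
import CoreImage
import UIKit

// Applies basic color adjustments to an already-rasterized SVG image.
// `svgRaster` is assumed to be a UIImage produced by rendering the SVG
// (e.g. drawing the Macaw node into a UIGraphicsImageRenderer context).
func adjusted(_ svgRaster: UIImage,
              contrast: Float = 1.1,
              saturation: Float = 1.2,
              brightness: Float = 0.05) -> UIImage? {
    guard let input = CIImage(image: svgRaster),
          let filter = CIFilter(name: "CIColorControls") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(contrast, forKey: kCIInputContrastKey)
    filter.setValue(saturation, forKey: kCIInputSaturationKey)
    filter.setValue(brightness, forKey: kCIInputBrightnessKey)
    guard let output = filter.outputImage else { return nil }
    let context = CIContext()
    guard let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}
```

The trade-off is that the vector data is lost after rasterization, so the filter has to be re-applied at the target resolution if the SVG is rescaled.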

Related

Find and crop largest interior bounding box of image

I made an optical hardware setup that produces stereo images, and I'm developing a helper application for it. With this equipment I shoot an object from 3 different angles and split the photo into 3 separate image variables. This is what the photos become after I correct the perspective distortion with CIPerspectiveTransform. There are redundant areas in the images that I do not use.
Perspective corrected image: https://i.imgur.com/ACJgaIy.gif
I focus the images by dragging, and after focusing I try to get the intersection areas. I can get the intersection of the 3 images of different sizes and shapes with the CISourceInCompositing filter. However, the resulting images come out with irregular shapes. Due to the proportional scaling I use while focusing, the images also contain transparent areas. You can download and test this image: https://i.imgur.com/uo8Srvv.png
Composited image: https://i.imgur.com/OY3owts.png
Composited animated image: https://i.imgur.com/M8JOdxR.gif
func intersectImages(inputImage: UIImage, backgroundImage: UIImage) -> UIImage {
    if let currentFilter = CIFilter(name: "CISourceInCompositing") {
        let inputImageCi = CIImage(image: inputImage)
        let backgroundImageCi = CIImage(image: backgroundImage)
        currentFilter.setValue(inputImageCi, forKey: "inputImage")
        currentFilter.setValue(backgroundImageCi, forKey: "inputBackgroundImage")
        let context = CIContext()
        if let outputImage = currentFilter.outputImage,
           let extent = backgroundImageCi?.extent,
           let cgOutputImage = context.createCGImage(outputImage, from: extent) {
            return UIImage(cgImage: cgOutputImage)
        }
    }
    return UIImage()
}
The problem I'm stuck with is: is it possible to extract the images as rectangles, either while getting these intersection areas or after the intersection operations? I couldn't come up with any solution. I'm trying to get the green-framed photo I shared as the final result.
Target image: https://i.imgur.com/18htpjm.png
Target image animated https://i.imgur.com/fMcElGy.gif
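One way to approach this after the intersection step is to work on the alpha channel of the composited image. The sketch below is a greedy heuristic, not the true maximum-area interior rectangle: it scans the alpha channel, then repeatedly trims whichever border row or column contains the most transparent pixels until every border pixel is opaque. The function name and threshold are illustrative assumptions.

```swift
import CoreGraphics
import UIKit

// Heuristic: shrink from the full extent until the border is fully opaque.
// Returns the rect in pixel coordinates of the underlying CGImage.
func largestOpaqueRect(in image: UIImage, alphaThreshold: UInt8 = 8) -> CGRect? {
    guard let cg = image.cgImage else { return nil }
    let w = cg.width, h = cg.height
    let bytesPerPixel = 4
    var pixels = [UInt8](repeating: 0, count: w * h * bytesPerPixel)
    // Draw into an RGBA buffer so we can read the alpha byte of each pixel.
    guard let ctx = CGContext(data: &pixels, width: w, height: h,
                              bitsPerComponent: 8, bytesPerRow: w * bytesPerPixel,
                              space: CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }
    ctx.draw(cg, in: CGRect(x: 0, y: 0, width: w, height: h))

    func alpha(_ x: Int, _ y: Int) -> UInt8 { pixels[(y * w + x) * bytesPerPixel + 3] }

    var left = 0, right = w - 1, top = 0, bottom = h - 1
    func holes(row y: Int) -> Int { (left...right).filter { alpha($0, y) < alphaThreshold }.count }
    func holes(col x: Int) -> Int { (top...bottom).filter { alpha(x, $0) < alphaThreshold }.count }

    while left < right && top < bottom {
        let t = holes(row: top), b = holes(row: bottom)
        let l = holes(col: left), r = holes(col: right)
        let worst = max(t, b, l, r)
        if worst == 0 { break }          // all border pixels are opaque
        if worst == t { top += 1 }
        else if worst == b { bottom -= 1 }
        else if worst == l { left += 1 }
        else { right -= 1 }
    }
    return CGRect(x: left, y: top, width: right - left + 1, height: bottom - top + 1)
}
```

You would then crop the composited UIImage to the returned rect (e.g. with CGImage's cropping(to:)). For the exact maximal rectangle there are dynamic-programming algorithms ("largest rectangle in a histogram" applied per row), but the greedy trim is usually close for roughly convex intersection areas like these.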

Achieving erase/restore drawing on UIImage in Swift

I'm trying to make a simple image eraser tool, where the user can erase and restore by drawing on an image, just like in this image:
After many attempts and tests, I have achieved sufficient "erase" functionality with the following code on the UI side:
// Drawing code - on user touch
// `currentPath` is a `UIBezierPath` property of the containing class.
guard let image = pickedImage else { return }
UIGraphicsBeginImageContextWithOptions(imageView.frame.size, false, 0)
if let context = UIGraphicsGetCurrentContext() {
    mainImageView.layer.render(in: context)
    context.addPath(currentPath.cgPath)
    context.setBlendMode(.clear)
    context.setLineWidth(translatedBrushWidth)
    context.setLineCap(.round)
    context.setLineJoin(.round)
    context.setStrokeColor(UIColor.clear.cgColor)
    context.strokePath()
    let capturedImage = UIGraphicsGetImageFromCurrentImageContext()
    imageView.image = capturedImage
}
UIGraphicsEndImageContext()
Upon user touch-up, I apply a scale transform to currentPath to render the image with the cutout part at full size, to preserve UI performance.
What I'm trying to figure out now is how to approach the "restore" functionality. Essentially, the user should draw on the erased parts to reveal the original image.
I've tried looking at CGContextClipToMask but I'm not sure how to approach the implementation.
I've also looked at other approaches to achieving this "erase/restore" effect before rendering the actual images, such as masking a CAShapeLayer over the image but also in this approach restoring becomes a problem.
Any help will be greatly appreciated, as well as alternative approaches to erase and restore with a path on the UI-level and rendering level.
Thank you!
Yes, I would recommend adding a CALayer to your image's layer as a mask.
You can either make the mask layer a CAShapeLayer and draw geometric shapes into it, or use a simple CALayer as a mask, where the contents property of the mask layer is a CGImage. You'd then draw opaque pixels into the mask to reveal the image contents, or transparent pixels to "erase" the corresponding image pixels.
This approach is hardware accelerated and quite fast.
Handling undo/redo of eraser functions would require you to collect changes to your mask layer as well as the previous state of the mask.
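The mask-layer approach described above can be sketched like this (the helper name and the install site are illustrative, not taken from the linked project):

```swift
import UIKit

// Sketch: a CALayer whose `contents` is a CGImage, used as a mask on the
// image view's layer. Opaque mask pixels reveal the image; transparent
// mask pixels "erase" the corresponding image pixels.
func makeMaskLayer(size: CGSize) -> CALayer {
    let renderer = UIGraphicsImageRenderer(size: size)
    // Start fully opaque so the whole image is visible initially.
    let maskImage = renderer.image { ctx in
        UIColor.black.setFill()
        ctx.fill(CGRect(origin: .zero, size: size))
    }
    let mask = CALayer()
    mask.frame = CGRect(origin: .zero, size: size)
    mask.contents = maskImage.cgImage
    return mask
}

// Install it once:
//   imageView.layer.mask = makeMaskLayer(size: imageView.bounds.size)
// To erase, redraw the mask image stroking the user's path with
// .clear blend mode and reassign `contents`; to restore, stroke the
// same path with opaque black instead.
```

Because the mask is just an image you redraw, undo/redo reduces to keeping snapshots (or the stroke list) of the mask's previous states.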
Edit:
I created a small demo app on Github that shows how to use a CGImage as a mask on an image view
Here is the ReadMe file from that project:
MaskableImageView
This project demonstrates how to use a CALayer to mask a UIView.
It defines a custom subclass of UIImageView, MaskableView.
The MaskableView class has a property maskLayer that contains a CALayer.
MaskableView defines a didSet method on its bounds property so that when the view's bounds change, it resizes the mask layer to match the size of the image view.
The MaskableView has a method installSampleMask which builds an image the same size as the image view, mostly filled with opaque black, but with a small rectangle in the center filled with black at an alpha of 0.7. The translucent center rectangle causes the image view to become partly transparent and show the view underneath.
The demo app installs a couple of subviews into the MaskableView, a sample image of Scampers, one of my dogs, and a UILabel. It also installs an image of a checkerboard under the MaskableView so that you can see the translucent parts more easily.
The MaskableView has properties circleRadius, maskDrawingAlpha, and drawingAction that it uses to let the user erase/un-erase the image by tapping on the view to update the mask.
The MaskableView attaches a UIPanGestureRecognizer and a UITapGestureRecognizer to itself, with an action of gestureRecognizerUpdate. The gestureRecognizerUpdate method takes the tap/drag location from the gesture recognizer and uses it to draw a circle onto the image mask that either decreases the image mask's alpha (to partly erase pixels) or increases it (to make those pixels more opaque).
The MaskableView's mask drawing is crude and only meant for demonstration purposes. It draws a series of discrete circles instead of rendering a path into the mask based on the user's drag gesture. A better solution would be to connect the points from the gesture recognizer and use them to render a smoothed curve into the mask.
The app's screen looks like this:
Edit #2:
If you want to export the resulting image to a file that preserves the transparency, you can convert the CGImage to a UIImage (Using the init(cgImage:) initializer) and then use the UIImage function
func pngData() -> Data?
to convert the image to PNG data. That function returns nil if it is unable to convert the image to PNG data.
If it succeeds, you can then save the data to a file with a .png extension.
I updated the sample project to include the ability to save the resulting image to disk.
First I added an image computed property to the MaskableView. That looks like this:
public var image: UIImage? {
    guard let renderer = renderer else { return nil }
    let result = renderer.image { context in
        layer.render(in: context.cgContext)
    }
    return result
}
Then I added a save button to the view controller that fetches the image from the MaskableView and saves it to the app's Documents directory:
@IBAction func handleSaveButton(_ sender: UIButton) {
    print("In handleSaveButton")
    if let image = maskableView.image,
       let pngData = image.pngData() {
        print(image.description)
        let imageURL = getDocumentsDirectory().appendingPathComponent("image.png", isDirectory: false)
        do {
            try pngData.write(to: imageURL)
            print("Wrote png to \(imageURL.path)")
        } catch {
            print("Error writing file to \(imageURL.path)")
        }
    }
}
You could also save the image to the user's camera roll. It's been a while since I've done that so I'd have to dig up the steps for that.

Apply Core Image Filter (CIBumpDistortion) to only one part of an image + change radius of selection and intensity of CIFilter

I would like to copy some of the features displayed here:
So I would like the user to apply a CIBumpDistortion filter to an image and let him choose
1) where exactly he wants to apply it by letting him just touch the respective location on the image
2a) the size of the circle selection (first slider in the image above)
2b) the intensity of the CIBumpDistortion Filter (second slider in the image above)
I read some previously asked questions, but they were not really helpful, and some of the solutions sounded far from user-friendly (e.g. cropping the needed part, then reapplying it to the old image). I hope I am not asking for too much at once. Objective-C would be preferred, but any help or hint would be much appreciated! Thank you in advance!
I wrote a demo (iPad) project that lets you apply most supported CIFilters. It interrogates each filter for the parameters it needs and has built-in support for float values as well as points and colors. For the bump distortion filter it lets you select a center point, a radius, and an input scale.
The project is called CIFilterTest. You can download the project from Github at this link: https://github.com/DuncanMC/CIFilterTest
There is quite a bit of housekeeping in the app to support the general-purpose ability to use any supported filter, but it should give you enough information to implement your own bump filter as you're asking to do.
The approach I worked out to applying a filter and getting it to render without extending outside of the bounds of the original image is to first apply a clamp filter to the image (CIAffineClamp) set to the identity transform, take the output of that filter and feed that into the input of your "target" filter (the bump distortion filter in this case) and then take the output of that and feed that into a crop filter (CICrop) with the bounds of the crop filter set to the original image size.
The method to look for in the sample project is called showImage, in ViewController.m
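The clamp → filter → crop chain described above can be sketched in Swift as follows (the linked project is Objective-C, so this is an illustrative translation; clampedToExtent() and cropped(to:) are the modern conveniences for CIAffineClamp with the identity transform and CICrop):

```swift
import CoreImage
import UIKit

// Chain: clamp -> CIBumpDistortion -> crop, so the distortion doesn't pull
// in transparent pixels from outside the image, and the output keeps the
// original extent.
func bumpDistort(_ image: UIImage, center: CGPoint, radius: CGFloat, scale: CGFloat) -> UIImage? {
    guard let input = CIImage(image: image) else { return nil }

    // 1. Clamp: edge pixels extend infinitely (identity-transform CIAffineClamp).
    let clamped = input.clampedToExtent()

    // 2. Apply the target filter to the clamped image.
    guard let bump = CIFilter(name: "CIBumpDistortion") else { return nil }
    bump.setValue(clamped, forKey: kCIInputImageKey)
    bump.setValue(CIVector(x: center.x, y: center.y), forKey: kCIInputCenterKey)
    bump.setValue(radius, forKey: kCIInputRadiusKey)
    bump.setValue(scale, forKey: kCIInputScaleKey)
    guard let bumped = bump.outputImage else { return nil }

    // 3. Crop back to the original image's extent.
    let cropped = bumped.cropped(to: input.extent)

    let context = CIContext()
    guard let cg = context.createCGImage(cropped, from: cropped.extent) else { return nil }
    return UIImage(cgImage: cg)
}
```

To satisfy the question's touch interaction, `center` would come from the user's tap location (converted from view to image coordinates, with the y-axis flipped for Core Image), and `radius` and `scale` from the two sliders.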
You wrote:
1) where exactly he wants to apply it by letting him just touch the
respective location on the image
2a) the size of the circle selection (first slider in the image above)
2b) the intensity of the CIBumpDistortion Filter (second slider in the
image above)
Well, CIBumpDistortion has these attributes:
- inputCenter is the center of the effect
- inputRadius is the size of the circle selection
- inputScale is the intensity
Simon
To show the bump, you have to pass the location (kCIInputCenterKey) on the image, along with the radius size (the white circle in your case):
func applyBumpDistort(toImage currentImage: UIImage, radius: Float, intensity: Float) -> UIImage? {
    let context = CIContext()
    guard let currentFilter = CIFilter(name: "CIBumpDistortion"),
          let beginImage = CIImage(image: currentImage) else { return nil }
    currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
    currentFilter.setValue(radius, forKey: kCIInputRadiusKey)
    currentFilter.setValue(intensity, forKey: kCIInputScaleKey)
    currentFilter.setValue(CIVector(x: currentImage.size.width / 2, y: currentImage.size.height / 2),
                           forKey: kCIInputCenterKey)
    guard let image = currentFilter.outputImage else { return nil }
    if let cgimg = context.createCGImage(image, from: image.extent) {
        return UIImage(cgImage: cgimg)
    }
    return nil
}

Unexpected result of CISourceOverCompositing when alpha is involved

When trying to place an image with a 60% alpha channel over another image with a 100% alpha channel on iOS using Core Image, I got a result I didn't expect. If I take the two images and place scene_2_480p over scene_480p like this:
let back: CIImage = loadImage("scene_480p", type: "jpg")
let front: CIImage = loadImage("scene_2_480p", type: "png")
let composeFilter = CIFilter(name: "CISourceOverCompositing")!
composeFilter.setDefaults()
composeFilter.setValue(front, forKey: kCIInputImageKey)
composeFilter.setValue(back, forKey: kCIInputBackgroundImageKey)
let result: CIImage = composeFilter.outputImage!
I get this:
If I do the same with gimp, and place the same two images on two overlapping layers I get:
The result is close, but not the same. Can anyone explain why the results are not the same and how to get the identical result that GIMP produces?
These are the original images I used:
I'm still not able to answer the "why" question, but by using this it is possible to get the correct result, with the proper alpha value. The scale must be set to 1.0 for the same result.
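One known source of this kind of mismatch, which may well be the "why" here: Core Image composites in a linear working color space by default, while GIMP's legacy layer modes blend in gamma-encoded sRGB, which visibly shifts semi-transparent results. A sketch of the workaround, rendering with a nil working color space so Core Image blends in gamma space (whether this matches these exact images is an assumption):

```swift
import CoreImage

// Disable Core Image's linear working color space so that
// CISourceOverCompositing blends in gamma-encoded space, the way
// GIMP's legacy layer compositing does.
let gammaSpaceContext = CIContext(options: [
    .workingColorSpace: NSNull()
])

// Render the CISourceOverCompositing output with this context, e.g.:
// let cg = gammaSpaceContext.createCGImage(result, from: result.extent)
```
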

Color Vignette Core Image filter for iOS?

I have tested the vignette filter in Core Image and, while it is good, I am wondering whether anyone has implemented a color vignette effect (instead of darkening the edges, it softens them) by chaining various Core Image filters on iOS? Or can point me to a tutorial?
Based on the answer below, this is my code, but it does not seem to have much effect.
// `ctx` is a CIContext property and `vignette` a Float property of the class.
func colorVignette(image: UIImage) -> UIImage {
    let cimage = CIImage(image: image)!
    let whiteImage = CIImage(image: colorImage(UIColor.white, size: image.size))!
    let output1 = CIFilter(name: "CIGaussianBlur", parameters: [kCIInputImageKey: cimage, kCIInputRadiusKey: 5])!.outputImage!
    let output2 = CIFilter(name: "CIVignette", parameters: [kCIInputImageKey: whiteImage, kCIInputIntensityKey: vignette, kCIInputRadiusKey: 1])!.outputImage!
    let output = CIFilter(name: "CIBlendWithMask", parameters: [kCIInputImageKey: cimage, kCIInputMaskImageKey: output2, kCIInputBackgroundImageKey: output1])!.outputImage!
    return UIImage(cgImage: ctx.createCGImage(output, from: cimage.extent)!)
}

func colorImage(_ color: UIColor, size: CGSize) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(size, false, 0)
    color.setFill()
    UIRectFill(CGRect(x: 0, y: 0, width: size.width, height: size.height))
    let image = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return image
}
You could create this filter by chaining together a Gaussian blur, a vignette, and a blend-with-mask, plus the original image. First, blur the input image with CIGaussianBlur. Next, apply the CIVignette filter to a solid white image of the same size. Finally, mix the original image with the blurred image using the CIBlendWithMask filter, with the vignetted white image as the mask: where the mask is bright the original shows; where it is dark, the blurred version shows.
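A sketch of that chain in current Swift (the parameter values and function name are illustrative):

```swift
import CoreImage
import UIKit

// Color vignette: edges fade into a blurred copy of the image instead of black.
// Chain: blur the input; vignette a solid white image to build a mask;
// blend original (center) with blurred (edges) through that mask.
func colorVignette(_ input: UIImage, blurRadius: Double = 8, intensity: Double = 1) -> UIImage? {
    guard let ciInput = CIImage(image: input) else { return nil }
    let extent = ciInput.extent

    // Blurred copy, cropped back to the original extent.
    let blurred = ciInput
        .applyingFilter("CIGaussianBlur", parameters: [kCIInputRadiusKey: blurRadius])
        .cropped(to: extent)

    // Vignetted white image: bright center, dark edges -> the blend mask.
    let white = CIImage(color: CIColor.white).cropped(to: extent)
    let mask = white.applyingFilter("CIVignette", parameters: [
        kCIInputIntensityKey: intensity,
        kCIInputRadiusKey: 2.0,
    ])

    // Where the mask is bright, keep the original; where dark, use the blur.
    let output = ciInput.applyingFilter("CIBlendWithMask", parameters: [
        kCIInputBackgroundImageKey: blurred,
        kCIInputMaskImageKey: mask,
    ])

    let context = CIContext()
    guard let cg = context.createCGImage(output, from: extent) else { return nil }
    return UIImage(cgImage: cg)
}
```

If the effect looks too weak (as in the question's code), the usual culprits are a blur radius that is small relative to the image's pixel size and a vignette intensity near zero; both scale with image resolution.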
