How to get a round layer mask? - iOS

I have a square CIImage.
I want to create a round mask and apply it to this image.
Should I use a CIFilter?

To do it entirely with Core Image, you can follow this recipe:
let image = CIImage(contentsOf: URL(string: "https://s3.amazonaws.com/uploads.hipchat.com/27364/1957261/bJERqq8O2V3NVok/upload.jpg")!)

// Construct a circle
let circle = CIFilter(name: "CIRadialGradient", withInputParameters: [
    "inputRadius0": 100,
    "inputRadius1": 100,
    "inputColor0": CIColor(red: 1, green: 1, blue: 1, alpha: 1),
    "inputColor1": CIColor(red: 0, green: 0, blue: 0, alpha: 0)
])?.outputImage

// Turn the circle into an alpha mask
let mask = CIFilter(name: "CIMaskToAlpha", withInputParameters: [kCIInputImageKey: circle!])?.outputImage

// Apply the mask to the input image
let combine = CIFilter(name: "CIBlendWithAlphaMask", withInputParameters: [
    kCIInputMaskImageKey: mask!,
    kCIInputImageKey: image!
])

let output = combine?.outputImage
Note that this approach only works for circles.
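If you need a UIImage out of that pipeline, a minimal sketch (my addition, not part of the original answer) is to render the result with a CIContext, cropping to the source image's extent first because gradient-based masks have an infinite extent:

let context = CIContext()
if let output = combine?.outputImage, let input = image {
    // Crop to the original image's extent, then render to a CGImage.
    let cropped = output.cropped(to: input.extent)
    if let cgImage = context.createCGImage(cropped, from: cropped.extent) {
        let result = UIImage(cgImage: cgImage)
        // use `result` ...
    }
}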

Use it like this for a round image with a border (this assumes the image view is square):
self.profileImageView.layer.cornerRadius = self.profileImageView.frame.size.width / 2
self.profileImageView.clipsToBounds = true
self.profileImageView.layer.borderWidth = 3.0
self.profileImageView.layer.borderColor = UIColor.white.cgColor

Maybe this can help to create a round image using Core Graphics; they create a round blurred image with Core Graphics:
Blur specific part of an image (rectangular, circular)?

I solved my problem :).
I used an extension.
override var outputImage: CIImage! {
    if let inputImage = inputImage {
        // make a UIImage from the CIImage
        let uiImage = UIImage(ciImage: inputImage)
        // use the extension below
        let roundImage = uiImage.roundedRectImageFromImage(image: uiImage, imageSize: CGSize(width: uiImage.size.width / 2, height: uiImage.size.width / 2), cornerRadius: uiImage.size.width / 2)
        // get back a CIImage
        let ciImage = CIImage(image: roundImage)
        // ... there is other code here
    }
}
extension UIImage {
    func roundedRectImageFromImage(image: UIImage, imageSize: CGSize, cornerRadius: CGFloat) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(imageSize, false, 0.0)
        let bounds = CGRect(origin: CGPoint.zero, size: imageSize)
        UIBezierPath(roundedRect: bounds, cornerRadius: cornerRadius).addClip()
        image.draw(in: bounds)
        let finalImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return finalImage!
    }
}
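For completeness, a quick usage sketch of that extension outside the filter (the image name is a placeholder of mine):

let avatar = UIImage(named: "avatar")!
// Clip the square image to a circle whose diameter equals the image width.
let rounded = avatar.roundedRectImageFromImage(image: avatar, imageSize: avatar.size, cornerRadius: avatar.size.width / 2)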

Related

Sharing PKCanvasView with a subview as an Image

I have a PKCanvasView with a UIImageView as a subview.
I'd like to share the whole thing as a UIImage: the UIImageView plus the PKCanvasView.drawing that sits above it.
Here is how I am getting the drawing portion; I am just unsure how to include the UIImageView with it.
private func image(from canvas: PKCanvasView) -> UIImage {
    let drawing = canvas.drawing
    let visibleRect = canvas.bounds
    let image = drawing.image(from: visibleRect, scale: UIScreen.main.scale)
    return image
}
Update:
Here is what I am currently trying:
let format = UIGraphicsImageRendererFormat.default()
format.opaque = false
let visibleRect = CGRect(x: 0, y: 0, width: self.imageView.frame.size.width, height: self.imageView.frame.size.height)
let image = UIGraphicsImageRenderer(size: self.imageView.frame.size, format: format).image { context in
    imageView.image?.draw(in: visibleRect, blendMode: .normal, alpha: 1.0)
    canvasView.drawing.image(from: visibleRect, scale: UIScreen.main.scale).draw(at: .zero)
}
The problem I am facing with this is that my image gets stretched from top to bottom, some images more than others.
You will need to create a new graphics context, draw your image into it and then draw your drawing on top of it. Your code should look something like this:
let format = UIGraphicsImageRendererFormat.default()
format.opaque = false
let image = UIGraphicsImageRenderer(size: imageView.frame.size, format: format).image { context in
    imageView.image?.draw(at: .zero)
    canvas.drawing.image(from: imageView.frame, scale: UIScreen.main.scale).draw(at: .zero)
}
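If the stretching persists, it is usually because the UIImage's pixel size does not match the image view's frame, so drawing it into a frame-sized context distorts it. A hedged sketch of drawing the photo aspect-fitted first, using AVFoundation's AVMakeRect (my suggestion, not part of the answer above, and it assumes the canvas and image view share the same bounds):

import AVFoundation

let format = UIGraphicsImageRendererFormat.default()
format.opaque = false
let rendered = UIGraphicsImageRenderer(size: imageView.bounds.size, format: format).image { _ in
    if let photo = imageView.image {
        // Fit the photo inside the image view's bounds, preserving its aspect ratio.
        let fitted = AVMakeRect(aspectRatio: photo.size, insideRect: imageView.bounds)
        photo.draw(in: fitted)
    }
    // Draw the canvas content on top, sized to the same bounds.
    canvasView.drawing.image(from: canvasView.bounds, scale: UIScreen.main.scale)
        .draw(in: imageView.bounds)
}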

MTKView displaying CIImage goes wrong when image changes size

I'm using a fairly standard subclass of MTKView that displays a CIImage, for the purposes of rapidly updating using CIFilters. Code for the draw() function is below.
The problem I have is that only the portions of the view that are covered by the image, in this case scaledImage in the code below, are actually redrawn. This means that if the new image is smaller than the previous image, I can still see portions of the old image poking out from behind it.
Is there a way to clear the content of the drawable's texture when the size of the contained image changes, so that this does not happen?
override func draw() {
    guard let image = image,
        let targetTexture = currentDrawable?.texture,
        let commandBuffer = commandQueue.makeCommandBuffer()
        else { return }

    let bounds = CGRect(origin: CGPoint.zero, size: drawableSize)

    let scaleX = drawableSize.width / image.extent.width
    let scaleY = drawableSize.height / image.extent.height
    let scale = min(scaleX, scaleY)

    let width = image.extent.width * scale
    let height = image.extent.height * scale
    let originX = (bounds.width - width) / 2
    let originY = (bounds.height - height) / 2

    let scaledImage = image
        .transformed(by: CGAffineTransform(scaleX: scale, y: scale))
        .transformed(by: CGAffineTransform(translationX: originX, y: originY))

    ciContext.render(
        scaledImage,
        to: targetTexture,
        commandBuffer: commandBuffer,
        bounds: bounds,
        colorSpace: colorSpace
    )

    guard let drawable = currentDrawable else { return }
    commandBuffer.present(drawable)
    commandBuffer.commit()
    super.draw()
}
MTKView has a clearColor property which you can use like so:
clearColor = MTLClearColorMake(1, 1, 1, 1)
Alternatively, if you'd like to use CIImage, you can just create one with CIColor like so:
CIImage(color: CIColor(red: 1, green: 1, blue: 1)).cropped(to: CGRect(origin: .zero, size: drawableSize))
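To tie that into the draw() method above, one option (my sketch, assuming a white backdrop is acceptable) is to composite the scaled image over a drawable-sized background image before rendering, so every pixel of the texture is overwritten each frame:

// Build a drawable-sized background and put the scaled image on top of it,
// so stale pixels left by a previous, larger image are always overwritten.
let background = CIImage(color: CIColor(red: 1, green: 1, blue: 1))
    .cropped(to: CGRect(origin: .zero, size: drawableSize))
let composited = scaledImage.composited(over: background)

ciContext.render(
    composited,
    to: targetTexture,
    commandBuffer: commandBuffer,
    bounds: bounds,
    colorSpace: colorSpace
)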
What you can do is tell the ciContext to clear the texture before you draw to it.
So before you do this,
ciContext.render(
    scaledImage,
    to: targetTexture,
    commandBuffer: commandBuffer,
    bounds: bounds,
    colorSpace: colorSpace
)
clear the render destination first:
let renderDestination = CIRenderDestination(
    width: Int(sizeToClear.width),
    height: Int(sizeToClear.height),
    pixelFormat: colorPixelFormat,
    commandBuffer: nil) { () -> MTLTexture in
        return drawable.texture
}

do {
    try ciContext.startTask(toClear: renderDestination)
} catch {
    // handle or log the error if clearing fails
}
For more info, see the CIRenderDestination documentation.
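Put together with the draw() method from the question, the clearing step might slot in like this (a sketch of mine; it assumes the size to clear is the drawable size and the destination is the current drawable's texture):

guard let drawable = currentDrawable else { return }

// Clear the whole drawable before rendering the (possibly smaller) image.
let destination = CIRenderDestination(
    width: Int(drawableSize.width),
    height: Int(drawableSize.height),
    pixelFormat: colorPixelFormat,
    commandBuffer: nil) { drawable.texture }

try? ciContext.startTask(toClear: destination)

ciContext.render(
    scaledImage,
    to: drawable.texture,
    commandBuffer: commandBuffer,
    bounds: bounds,
    colorSpace: colorSpace
)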
I have the following solution which works, but I'm not very pleased with it as it feels hacky so please, somebody tell me a better way of doing it!
What I'm doing is tracking when the size of the image changes and setting a flag called needsRefresh. During the view's draw() function I then check to see if this flag is set and, if it is, I create a new CIImage from the view's clearColor that's the same size as the drawable and render this before continuing to the image I actually want to render.
This seems really inefficient, so better solutions are welcome!
Here's the additional drawing code:
if needsRefresh {
    let color = UIColor(mtkColor: clearColor)
    let clearImage = CIImage(cgImage: UIImage.withColor(color, size: bounds.size)!.cgImage!)
    ciContext.render(
        clearImage,
        to: targetTexture,
        commandBuffer: commandBuffer,
        bounds: bounds,
        colorSpace: colorSpace
    )
    needsRefresh = false
}
Here's the UIColor extension for converting the clear color:
extension UIColor {
    convenience init(mtkColor: MTLClearColor) {
        let red = CGFloat(mtkColor.red)
        let green = CGFloat(mtkColor.green)
        let blue = CGFloat(mtkColor.blue)
        let alpha = CGFloat(mtkColor.alpha)
        self.init(red: red, green: green, blue: blue, alpha: alpha)
    }
}
And finally, a small extension for creating a UIImage from a given background colour:
extension UIImage {
    static func withColor(_ color: UIColor, size: CGSize = CGSize(width: 10, height: 10)) -> UIImage? {
        UIGraphicsBeginImageContext(size)
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        context.setFillColor(color.cgColor)
        context.fill(CGRect(origin: .zero, size: size))
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}

Swift - How to change a png tint color without losing the clear background? [duplicate]

I am new to Swift and trying to achieve this essentially: turn this image [first image] into this image [second image].
I am using this code from here to change the tint on the image, but I do not get the desired output:
func tint(image: UIImage, color: UIColor) -> UIImage {
    let ciImage = CIImage(image: image)
    let filter = CIFilter(name: "CIMultiplyCompositing")!
    let colorFilter = CIFilter(name: "CIConstantColorGenerator")!
    let ciColor = CIColor(color: color)
    colorFilter.setValue(ciColor, forKey: kCIInputColorKey)
    let colorImage = colorFilter.outputImage
    filter.setValue(colorImage, forKey: kCIInputImageKey)
    filter.setValue(ciImage, forKey: kCIInputBackgroundImageKey)
    return UIImage(ciImage: filter.outputImage!)
}
If this is a noob question, apologies. I was able to do this easily in JavaScript but not in Swift.
Hi, you can use it in the following way. First, the UIImage extension that you used needs some updates; you can copy the Swift 3 version below.
extension UIImage {
    func tint(color: UIColor, blendMode: CGBlendMode) -> UIImage {
        let drawRect = CGRect(x: 0, y: 0, width: size.width, height: size.height)
        UIGraphicsBeginImageContextWithOptions(size, false, scale)
        color.setFill()
        UIRectFill(drawRect)
        draw(in: drawRect, blendMode: blendMode, alpha: 1.0)
        let tintedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return tintedImage!
    }
}
Then in your viewDidLoad, apply it to the image; for example, I used the image from the IBOutlet imageWood:
override func viewDidLoad() {
    super.viewDidLoad()
    // Do any additional setup after loading the view, typically from a nib.
    self.imageWood.image = self.imageWood.image?.tint(color: UIColor.green, blendMode: .saturation)
}
You will have to use the appropriate color and images.
The other extension I found:
extension UIImage {

    // colorize image with given tint color
    // this is similar to Photoshop's "Color" layer blend mode
    // this is perfect for non-greyscale source images, and images that have both highlights and shadows that should be preserved
    // white will stay white and black will stay black as the lightness of the image is preserved
    func tint(_ tintColor: UIColor) -> UIImage {
        return modifiedImage { context, rect in
            // draw black background - workaround to preserve color of partially transparent pixels
            context.setBlendMode(.normal)
            UIColor.black.setFill()
            context.fill(rect)

            // draw original image
            context.setBlendMode(.normal)
            context.draw(self.cgImage!, in: rect)

            // tint image (losing alpha) - the luminosity of the original image is preserved
            context.setBlendMode(.color)
            tintColor.setFill()
            context.fill(rect)

            // mask by alpha values of original image
            context.setBlendMode(.destinationIn)
            context.draw(self.cgImage!, in: rect)
        }
    }

    // fills the alpha channel of the source image with the given color
    // any color information other than the alpha channel will be ignored
    func fillAlpha(_ fillColor: UIColor) -> UIImage {
        return modifiedImage { context, rect in
            // draw tint color
            context.setBlendMode(.normal)
            fillColor.setFill()
            context.fill(rect)

            // mask by alpha values of original image
            context.setBlendMode(.destinationIn)
            context.draw(self.cgImage!, in: rect)
        }
    }

    fileprivate func modifiedImage(_ draw: (CGContext, CGRect) -> ()) -> UIImage {
        // using scale correctly preserves retina images
        UIGraphicsBeginImageContextWithOptions(size, false, scale)
        let context: CGContext! = UIGraphicsGetCurrentContext()
        assert(context != nil)

        // correctly rotate image
        context.translateBy(x: 0, y: size.height)
        context.scaleBy(x: 1.0, y: -1.0)

        let rect = CGRect(x: 0.0, y: 0.0, width: size.width, height: size.height)
        draw(context, rect)

        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image!
    }
}
Use it like this:
self.imageWood.image = self.imageWood.image?.tint(UIColor.purple.withAlphaComponent(1))
Try the below code, should work for you.
if let myImage = UIImage(named: "imageName")?.withRenderingMode(.alwaysTemplate) {
    myImageView.image = myImage
    myImageView.tintColor = UIColor.white
}
Your image is simple and only has one color. I would suggest making your image transparent except where the lines are, and then layer it on top of a white background to get your results.
It looks like you got the results you're after using drawing modes. There are also a number of Core Image filters that let you apply tinting effects to images, as well as replace specific colors; those would be a good choice for more complex images.
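As a sketch of that Core Image route (my example, not from the answers above; it uses the built-in CIColorMonochrome filter, and the function name is a placeholder):

import UIKit
import CoreImage

// Tint a UIImage with CIColorMonochrome; intensity 1.0 fully recolors it.
func monochromeTint(_ sourceImage: UIImage, color: UIColor) -> UIImage? {
    guard let input = CIImage(image: sourceImage),
          let filter = CIFilter(name: "CIColorMonochrome") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(CIColor(color: color), forKey: kCIInputColorKey)
    filter.setValue(1.0, forKey: kCIInputIntensityKey)
    guard let output = filter.outputImage else { return nil }
    let context = CIContext()
    guard let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}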

Make UIImage a circle without losing quality? (Swift)

The image becomes blurry once roundImage is applied.
Making a UIImage into a circle:
extension UIImage {
    func roundImage() -> UIImage {
        let newImage = self.copy() as! UIImage
        let cornerRadius = self.size.height / 2
        UIGraphicsBeginImageContextWithOptions(self.size, false, 1.0)
        let bounds = CGRect(origin: CGPointZero, size: self.size)
        UIBezierPath(roundedRect: bounds, cornerRadius: cornerRadius).addClip()
        newImage.drawInRect(bounds)
        let finalImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return finalImage
    }
}
Why are you using a bezier path? Just set cornerRadius on the UIImageView.
If your image is larger than the image view, resize the image to the image view's size first, and then set cornerRadius on that image view.
It will work; it works for me. See the sketch below.
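A minimal sketch of that approach (assuming `profileImageView` is a square UIImageView; the name is a placeholder):

// Round the image view itself instead of redrawing the image.
profileImageView.contentMode = .scaleAspectFill
profileImageView.layer.cornerRadius = profileImageView.bounds.width / 2
profileImageView.clipsToBounds = true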
Replace the following line
UIGraphicsBeginImageContextWithOptions(self.size, false, 1.0)
with
UIGraphicsBeginImageContextWithOptions(self.size, false, 0.0)
so the context uses the device's screen scale instead of a fixed scale of 1.0.
Try this one:
let image = UIImageView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
I recommend using AlamofireImage (https://github.com/Alamofire/AlamofireImage).
It makes it very easy to create rounded or circular images without losing quality, just like this:
let image = UIImage(named: "unicorn")!
let radius: CGFloat = 20.0
let roundedImage = image.af_imageWithRoundedCornerRadius(radius)
let circularImage = image.af_imageRoundedIntoCircle()
Voila!
Your issue is that you are using scale 1, which is the lowest "quality".
Setting the scale to 0 will use the device scale, which just uses the image as is.
A side note: Functions inside a class that return a new instance of that class can be implemented as class functions. This makes it very clear what the function does. It does not manipulate the existing image. It returns a new one.
Since you were talking about circles, I also corrected your code so it will now make a circle of any image and crop it. You might want to center this.
extension UIImage {
    class func roundImage(image: UIImage) -> UIImage? {
        // copy
        guard let newImage = image.copy() as? UIImage else {
            return nil
        }
        // start context (scale 0.0 uses the device scale)
        UIGraphicsBeginImageContextWithOptions(newImage.size, false, 0.0)
        // bounds: a square based on the smaller dimension
        let cornerRadius = newImage.size.height / 2
        let minDim = min(newImage.size.height, newImage.size.width)
        let bounds = CGRect(origin: .zero, size: CGSize(width: minDim, height: minDim))
        UIBezierPath(roundedRect: bounds, cornerRadius: cornerRadius).addClip()
        // new image
        newImage.draw(in: bounds)
        let finalImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        // crop
        guard let rendered = finalImage else { return nil }
        return UIImage.crop(image: rendered, cropRect: bounds)
    }

    class func crop(image: UIImage, cropRect: CGRect) -> UIImage? {
        guard let imgRef = image.cgImage?.cropping(to: cropRect) else {
            return nil
        }
        return UIImage(cgImage: imgRef)
    }
}
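A quick usage sketch (the asset name and image view are placeholders of mine):

// Round an image from the asset catalog and show it.
if let avatar = UIImage(named: "avatar"),
   let circular = UIImage.roundImage(image: avatar) {
    imageView.image = circular
}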

iOS: how to blur an image with CGPath?

I create a CGPath area, shown as the green circle. The CGPath area needs to stay clear, and the rest of the image should get a blurry or translucent effect. I can clip the image inside the CGPath with the following code:
UIGraphicsBeginImageContext(view.frame.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextAddPath(ctx, path);
CGContextClip(ctx);
[view.layer renderInContext:ctx];
UIImage *clipImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(clipImage, nil, nil, nil);
CGPathRelease(path);
but I don't know how to apply the blurry or translucent effect outside the CGPath at the same time. I think I can blur the original image and merge it with the clipped image, but I don't know how to implement that.
You need two concepts to achieve the end result:
i) CGBlendMode
ii) CIFilter
CGBlendMode is used to remove the desired portion from the image; it controls how paths drawn into the current context are composited with one another.
The .clear mode (rawValue 16) clears the desired portion.
CIFilter is used to blur the final image.
class ConvertToBlurryImage: UIView {

    var originalImage: UIImage!
    var finalImage: UIImage!

    override func draw(_ rect: CGRect) {
        super.draw(rect)
        // Original image
        originalImage = UIImage(named: "originalImage.png")
        // Intermediate (blurred) image
        let intermediateImage = UIImage().returnBlurImage(image: originalImage)
        // Final result image with the clear area punched out
        finalImage = blurryImage(image: intermediateImage)
        let attachedImage = UIImageView(image: finalImage)
        addSubview(attachedImage)
    }

    func blurryImage(image: UIImage) -> UIImage {
        UIGraphicsBeginImageContext(frame.size)
        image.draw(in: CGRect(origin: frame.origin, size: frame.size))
        // .clear (rawValue 16) erases whatever is drawn next
        UIGraphicsGetCurrentContext()!.setBlendMode(.clear)
        // Path that needs to be cleared
        pathToCrop()
        let finalImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return finalImage!
    }

    func pathToCrop() {
        let path = UIBezierPath(ovalIn: CGRect(x: frame.width/2 - 50, y: frame.height/2 - 100, width: 150, height: 150))
        path.fill()
        path.stroke()
    }
}
extension UIImage {
    func returnBlurImage(image: UIImage) -> UIImage {
        let beginImage = CIImage(cgImage: image.cgImage!)
        let blurFilter = CIFilter(name: "CIGaussianBlur")
        blurFilter?.setValue(beginImage, forKey: "inputImage")
        let resultImage = blurFilter?.value(forKey: "outputImage") as! CIImage
        let blurredImage = UIImage(ciImage: resultImage)
        return blurredImage
    }
}
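One caveat worth adding: CIGaussianBlur expands the image's extent (the blur bleeds past the original edges), so you often want to crop the output back to the input's extent. A small sketch of that adjustment inside the extension above:

// Crop the blurred output back to the original extent so the resulting
// UIImage keeps the source dimensions.
let croppedResult = resultImage.cropped(to: beginImage.extent)
let blurredImage = UIImage(ciImage: croppedResult)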
Mission achieved.
Final GitHub demo
