I am compositing two UIImages together using a UIGraphicsContext.
It works for the most part, but using the Hard Light CGBlendMode, I am getting strange artifacts that are not present if I use a Hard Light transfer mode in Photoshop.
I would try to use Core Image, but I need to be able to control the opacity of the top layer being transferred with Hard Light, and as far as I can tell that is not possible.
Here is the image created with a UIGraphicsContext:
Here it is using the same layers and opacity in Photoshop:
Here is the base image:
And the layer being composited with 60% opacity using Hard Light:
I have tried setting the context interpolation quality to high and using a transparency layer, but nothing has made any improvement.
My code is below. Does anyone know how to fix this, or of an alternative way to achieve the same result I get in Photoshop?
public extension UIImage {
    func compImagesWithImage(_ topImg: UIImage, blendMode: CGBlendMode, opacity: CGFloat) -> UIImage? {
        let baseImg = self
        if let cgImage = baseImg.cgImage {
            let baseW = CGFloat(cgImage.width)
            let baseH = CGFloat(cgImage.height)
            UIGraphicsBeginImageContext(CGSize(width: baseW, height: baseH))
            if let context = UIGraphicsGetCurrentContext() {
                context.saveGState()
                context.interpolationQuality = .high
                let flipVertical = CGAffineTransform(a: 1, b: 0, c: 0, d: -1, tx: 0, ty: baseH)
                context.concatenate(flipVertical)
                context.setAlpha(1.0)
                context.setBlendMode(.normal)
                context.draw(cgImage, in: CGRect(origin: .zero, size: CGSize(width: baseW, height: baseH)))
                context.setAlpha(opacity)
                context.setBlendMode(blendMode)
                context.beginTransparencyLayer(auxiliaryInfo: nil)
                context.draw(topImg.cgImage!, in: CGRect(origin: .zero, size: CGSize(width: baseW, height: baseH)))
                context.endTransparencyLayer()
                context.restoreGState()
                let finalImage = UIGraphicsGetImageFromCurrentImageContext()
                UIGraphicsEndImageContext()
                return finalImage
            }
        }
        return nil
    }
}
This fixed it: Highlight Artifacts When Using Hard Light Blend Mode
I just had to set the alpha of the base layer to 0.99.
Here is the updated code:
public func compImagesWithImage(_ topImg: UIImage, blendMode: CGBlendMode, opacity: CGFloat) -> UIImage? {
    let baseImg = self
    if let cgImage = baseImg.cgImage {
        let baseW = CGFloat(cgImage.width)
        let baseH = CGFloat(cgImage.height)
        UIGraphicsBeginImageContext(CGSize(width: baseW, height: baseH))
        if let context = UIGraphicsGetCurrentContext() {
            context.saveGState()
            context.interpolationQuality = .high
            let flipVertical = CGAffineTransform(a: 1, b: 0, c: 0, d: -1, tx: 0, ty: baseH)
            context.concatenate(flipVertical)
            // Drawing the base layer at 0.99 alpha avoids the highlight artifacts
            context.setAlpha(0.99)
            context.setBlendMode(.normal)
            context.draw(cgImage, in: CGRect(origin: .zero, size: CGSize(width: baseW, height: baseH)))
            context.setAlpha(opacity)
            context.setBlendMode(blendMode)
            context.beginTransparencyLayer(auxiliaryInfo: nil)
            context.draw(topImg.cgImage!, in: CGRect(origin: .zero, size: CGSize(width: baseW, height: baseH)))
            context.endTransparencyLayer()
            context.restoreGState()
            let finalImage = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()
            return finalImage
        }
    }
    return nil
}
I have the following functions.
extension UIImage
{
    var width: CGFloat
    {
        return size.width
    }

    var height: CGFloat
    {
        return size.height
    }

    private static func circularImage(diameter: CGFloat, color: UIColor) -> UIImage
    {
        UIGraphicsBeginImageContextWithOptions(CGSize(width: diameter, height: diameter), false, 0)
        let context = UIGraphicsGetCurrentContext()!
        context.saveGState()

        let rect = CGRect(x: 0, y: 0, width: diameter, height: diameter)
        context.setFillColor(color.cgColor)
        context.fillEllipse(in: rect)

        context.restoreGState()
        let image = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()

        return image
    }

    private func addCentered(image: UIImage, tintColor: UIColor) -> UIImage
    {
        let topImage = image.withTintColor(tintColor, renderingMode: .alwaysTemplate)
        let bottomImage = self

        UIGraphicsBeginImageContext(size)

        let bottomRect = CGRect(x: 0, y: 0, width: bottomImage.width, height: bottomImage.height)
        bottomImage.draw(in: bottomRect)

        let topRect = CGRect(x: (bottomImage.width - topImage.width) / 2.0,
                             y: (bottomImage.height - topImage.height) / 2.0,
                             width: topImage.width,
                             height: topImage.height)
        topImage.draw(in: topRect, blendMode: .normal, alpha: 1.0)

        let mergedImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()

        return mergedImage
    }
}
They work fine, but how do I properly apply UIScreen.main.scale to support retina screens?
I've looked at what's been done here but can't figure it out yet.
Any ideas?
Accessing UIScreen.main.scale itself is a bit problematic, since you may only access it from the main thread (while you usually want to push heavier image processing onto a background thread). So I suggest one of these approaches instead.
First of all, you can replace UIGraphicsBeginImageContext(size) with
UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
The last argument (0.0) is the scale; per the docs, "if you specify a value of 0.0, the scale factor is set to the scale factor of the device’s main screen."
If instead you want to retain the original image's scale in the resulting UIImage, you can do this: after topImage.draw, instead of getting the UIImage with UIGraphicsGetImageFromCurrentImageContext, get a CGImage from the current context with
let cgImage = UIGraphicsGetCurrentContext()!.makeImage()!
and then construct the UIImage with the scale and orientation of the original image (as opposed to the defaults):
let mergedImage = UIImage(
    cgImage: cgImage,
    scale: image.scale,
    orientation: image.imageOrientation)
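Putting the first suggestion into the original function, a minimal sketch of addCentered might look like this (assuming a retina-resolution output is what you want; not tested against your project):
private func addCentered(image: UIImage, tintColor: UIColor) -> UIImage
{
    let topImage = image.withTintColor(tintColor, renderingMode: .alwaysTemplate)
    let bottomImage = self

    // 0.0 means "use the main screen's scale" without touching UIScreen.main directly
    UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
    defer { UIGraphicsEndImageContext() }

    let bottomRect = CGRect(x: 0, y: 0, width: bottomImage.width, height: bottomImage.height)
    bottomImage.draw(in: bottomRect)

    let topRect = CGRect(x: (bottomImage.width - topImage.width) / 2.0,
                         y: (bottomImage.height - topImage.height) / 2.0,
                         width: topImage.width,
                         height: topImage.height)
    topImage.draw(in: topRect, blendMode: .normal, alpha: 1.0)

    return UIGraphicsGetImageFromCurrentImageContext()!
}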
func thumbImage(image: UIImage) -> UIImage {
    let cgSize: CGSize = CGSize(width: 100, height: 100)
    let thumb = UIGraphicsImageRenderer(size: cgSize)
    return thumb.image { _ in
        image.draw(in: CGRect(origin: .zero, size: cgSize))
    }
}
The final image is 300x300.
I would like the image to be 100x100 pixels no matter the iPhone screen resolution (it is a square image, of course).
How do I modify this code to achieve this result?
(I'm open to alternate ways of achieving this)
func thumbImage(image: UIImage, pxWidth: Int, pxHeight: Int) -> UIImage {
    let cgSize = CGSize(width: pxWidth, height: pxHeight)
    let rect = CGRect(x: 0, y: 0, width: pxWidth, height: pxHeight)

    UIGraphicsBeginImageContextWithOptions(cgSize, false, 1.0)
    image.draw(in: rect)
    let thumb = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    let compressedThumb = thumb!.jpegData(compressionQuality: 0.70)
    return UIImage(data: compressedThumb!)!
}
This alternative with UIGraphicsBeginImageContextWithOptions works and keeps the code as short as the initial one. (I also added some compression and conversion code).
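If you would rather stay with UIGraphicsImageRenderer, the same pixel-exact result can be had by pinning the renderer's scale to 1 via UIGraphicsImageRendererFormat. A sketch (without the JPEG compression step):
func thumbImage(image: UIImage, pxWidth: Int, pxHeight: Int) -> UIImage {
    let format = UIGraphicsImageRendererFormat()
    format.scale = 1 // 1 point == 1 pixel, regardless of the device's screen scale
    let size = CGSize(width: pxWidth, height: pxHeight)
    let renderer = UIGraphicsImageRenderer(size: size, format: format)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: size))
    }
}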
I have two images, both of which have an alpha channel. I want to combine them together. It works with a UIImageView, but I want to do it with CGImage, without creating a UIImageView.
I tried this with CGImage, but it is not working:
func combine3(_ bg: UIImage, cover: UIImage) -> UIImage? {
    let size = CGSize(width: self.view.frame.width, height: self.view.frame.height)
    UIGraphicsBeginImageContext(size)

    let maskRef = cover.cgImage!
    let mask = CGImage(maskWidth: maskRef.width,
                       height: maskRef.height,
                       bitsPerComponent: maskRef.bitsPerComponent,
                       bitsPerPixel: maskRef.bitsPerPixel,
                       bytesPerRow: maskRef.bytesPerRow,
                       provider: maskRef.dataProvider!,
                       decode: nil,
                       shouldInterpolate: false)
    let masked = bg.cgImage?.masking(mask!)
    let outPutImage = UIImage(cgImage: masked!)

    UIGraphicsEndImageContext()
    return outPutImage
}
but with a UIImageView it works pretty well:
let bg = creatBGImageFinal()!
bgIV.image = bg
let cover = createCoverImageFinal(progress: 0)!
let layer = CALayer()
layer.contents = cover.cgImage
layer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
maskIV.layer.mask = layer
bg:
cover, which is a picture with a white circle in the middle and the rest transparent:
result:
First, reverse (invert) the mask: the rule for CGImage masking is Dst = 1 - Src.
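One way to produce a reversed cover image (a sketch for illustration; any inversion method works) is to difference-blend the cover over white:
func reversedCover(_ cover: UIImage) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: cover.size)
    return renderer.image { ctx in
        // Start from solid white, then difference-blend the cover on top:
        // white over white becomes black, transparent areas stay white.
        UIColor.white.setFill()
        ctx.fill(CGRect(origin: .zero, size: cover.size))
        cover.draw(in: CGRect(origin: .zero, size: cover.size), blendMode: .difference, alpha: 1)
    }
}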
Then draw the masked image into an opaque image context:
func combine3(_ bg: UIImage, cover: UIImage, size: CGSize) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(size, true, 1)
    defer {
        UIGraphicsEndImageContext()
    }
    let context = UIGraphicsGetCurrentContext()!

    let maskRef = cover.cgImage!
    let mask = CGImage(maskWidth: maskRef.width,
                       height: maskRef.height,
                       bitsPerComponent: maskRef.bitsPerComponent,
                       bitsPerPixel: maskRef.bitsPerPixel,
                       bytesPerRow: maskRef.bytesPerRow,
                       provider: maskRef.dataProvider!,
                       decode: nil,
                       shouldInterpolate: false)!
    let masked = bg.cgImage!.masking(mask)!

    // adjust for lower-left-origin CG coordinates
    context.translateBy(x: 0, y: size.height)
    context.scaleBy(x: 1, y: -1)
    context.draw(masked, in: CGRect(origin: .zero, size: size))

    return UIGraphicsGetImageFromCurrentImageContext()
}
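Calling it could look like this (bgImage and coverImage stand in for whatever UIImages you already have):
let combined = combine3(bgImage, cover: coverImage, size: bgImage.size)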
Result:
I know there are several other ways to do this; I don't want to import anything that I don't need to. If someone can help me with this code, that would be great.
Currently, it is only saving the original image without the watermark image.
extension UIImage {
    class func imageWithWatermark(image1: UIImageView, image2: UIImageView) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(image1.bounds.size, false, 0.0)
        image2.layer.renderInContext(UIGraphicsGetCurrentContext()!)
        image1.layer.renderInContext(UIGraphicsGetCurrentContext()!)

        let img = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return img
    }
}
func addWatermark() {
    let newImage = UIImage.imageWithWatermark(imageView, image2: watermarkImageView)
    UIImageWriteToSavedPhotosAlbum(newImage, nil, nil, nil)
}
EDIT: I've got the watermark appearing on the saved images.
I had to switch the order of the layers:
image1.layer.renderInContext(UIGraphicsGetCurrentContext()!)
image2.layer.renderInContext(UIGraphicsGetCurrentContext()!)
HOWEVER, it is not appearing in the correct place. It seems to always appear in the center of the image.
If you grab the UIImageViews' images you could use the following concept:
if let img = UIImage(named: "image.png"), img2 = UIImage(named: "watermark.png") {
    let rect = CGRect(x: 0, y: 0, width: img.size.width, height: img.size.height)
    UIGraphicsBeginImageContextWithOptions(img.size, true, 0)

    let context = UIGraphicsGetCurrentContext()
    CGContextSetFillColorWithColor(context, UIColor.whiteColor().CGColor)
    CGContextFillRect(context, rect)

    img.drawInRect(rect, blendMode: .Normal, alpha: 1)
    img2.drawInRect(CGRectMake(x, y, width, height), blendMode: .Normal, alpha: 1)

    let result = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    UIImageWriteToSavedPhotosAlbum(result, nil, nil, nil)
}
SWIFT 4
Use this
let backgroundImage = imageData!
let watermarkImage = #imageLiteral(resourceName: "jodi_url_icon")
let size = backgroundImage.size
let scale = backgroundImage.scale
UIGraphicsBeginImageContextWithOptions(size, false, scale)
backgroundImage.draw(in: CGRect(x: 0.0, y: 0.0, width: size.width, height: size.height))
watermarkImage.draw(in: CGRect(x: 10, y: 10, width: size.width, height: size.height - 40))
let result = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Assign result to a UIImageView; tested.
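Wrapped into a reusable helper, it might look like this (a sketch; the watermark rect simply mirrors the values above):
func addWatermark(_ watermark: UIImage, to background: UIImage) -> UIImage? {
    let size = background.size
    UIGraphicsBeginImageContextWithOptions(size, false, background.scale)
    defer { UIGraphicsEndImageContext() }

    background.draw(in: CGRect(x: 0.0, y: 0.0, width: size.width, height: size.height))
    watermark.draw(in: CGRect(x: 10, y: 10, width: size.width, height: size.height - 40))

    return UIGraphicsGetImageFromCurrentImageContext()
}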
I'm using a fairly standard subclass of MTKView that displays a CIImage, for the purposes of rapidly updating using CIFilters. Code for the draw() function is below.
The problem I have is that only the portions of the view that are covered by the image, in this case scaledImage in the code below, are actually redrawn. This means that if the new image is smaller than the previous image, I can still see portions of the old image poking out from behind it.
Is there a way to clear the content of the drawable's texture when the size of the contained image changes, so that this does not happen?
override func draw() {
    guard let image = image,
          let targetTexture = currentDrawable?.texture,
          let commandBuffer = commandQueue.makeCommandBuffer()
    else { return }

    let bounds = CGRect(origin: CGPoint.zero, size: drawableSize)

    let scaleX = drawableSize.width / image.extent.width
    let scaleY = drawableSize.height / image.extent.height
    let scale = min(scaleX, scaleY)

    let width = image.extent.width * scale
    let height = image.extent.height * scale
    let originX = (bounds.width - width) / 2
    let originY = (bounds.height - height) / 2

    let scaledImage = image
        .transformed(by: CGAffineTransform(scaleX: scale, y: scale))
        .transformed(by: CGAffineTransform(translationX: originX, y: originY))

    ciContext.render(
        scaledImage,
        to: targetTexture,
        commandBuffer: commandBuffer,
        bounds: bounds,
        colorSpace: colorSpace
    )

    guard let drawable = currentDrawable else { return }

    commandBuffer.present(drawable)
    commandBuffer.commit()

    super.draw()
}
MTKView has a clearColor property, which you can use like so:
clearColor = MTLClearColorMake(1, 1, 1, 1)
Alternatively, if you'd like to use CIImage, you can just create one with CIColor like so:
CIImage(color: CIColor(red: 1, green: 1, blue: 1)).cropped(to: CGRect(origin: .zero, size: drawableSize))
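If you go the CIImage route, one way to use it (a sketch based on the draw() code in the question) is to composite the scaled image over that solid background, so every pixel of the drawable gets written each frame:
let background = CIImage(color: CIColor(red: 1, green: 1, blue: 1))
    .cropped(to: CGRect(origin: .zero, size: drawableSize))
let imageToRender = scaledImage.composited(over: background)

ciContext.render(
    imageToRender,
    to: targetTexture,
    commandBuffer: commandBuffer,
    bounds: bounds,
    colorSpace: colorSpace
)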
What you can do is tell the ciContext to clear the texture before you draw to it.
So before you do this,
ciContext.render(
    scaledImage,
    to: targetTexture,
    commandBuffer: commandBuffer,
    bounds: bounds,
    colorSpace: colorSpace)
first clear the texture:
let renderDestination = CIRenderDestination(
    width: Int(sizeToClear.width),
    height: Int(sizeToClear.height),
    pixelFormat: colorPixelFormat,
    commandBuffer: nil) { () -> MTLTexture in
        return drawable.texture
    }

do {
    try ciContext.startTask(toClear: renderDestination)
} catch {
}
For more info, see the documentation.
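In the draw() method from the question, sizeToClear would simply be drawableSize and drawable the current drawable, so the clear step might slot in like this (a sketch, untested):
guard let drawable = currentDrawable else { return }

let destination = CIRenderDestination(
    width: Int(drawableSize.width),
    height: Int(drawableSize.height),
    pixelFormat: colorPixelFormat,
    commandBuffer: nil) { () -> MTLTexture in
        return drawable.texture
    }
try? ciContext.startTask(toClear: destination)
// ...then render scaledImage into targetTexture as before.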
I have the following solution which works, but I'm not very pleased with it as it feels hacky so please, somebody tell me a better way of doing it!
What I'm doing is tracking when the size of the image changes and setting a flag called needsRefresh. During the view's draw() function I then check to see if this flag is set and, if it is, I create a new CIImage from the view's clearColor that's the same size as the drawable and render this before continuing to the image I actually want to render.
This seems really inefficient, so better solutions are welcome!
Here's the additional drawing code:
if needsRefresh {
    let color = UIColor(mtkColor: clearColor)
    let clearImage = CIImage(cgImage: UIImage.withColor(color, size: bounds.size)!.cgImage!)
    ciContext.render(
        clearImage,
        to: targetTexture,
        commandBuffer: commandBuffer,
        bounds: bounds,
        colorSpace: colorSpace
    )
    needsRefresh = false
}
Here's the UIColor extension for converting the clear color:
extension UIColor {
    convenience init(mtkColor: MTLClearColor) {
        let red = CGFloat(mtkColor.red)
        let green = CGFloat(mtkColor.green)
        let blue = CGFloat(mtkColor.blue)
        let alpha = CGFloat(mtkColor.alpha)
        self.init(red: red, green: green, blue: blue, alpha: alpha)
    }
}
And finally, a small extension for creating UIImages filled with a given background colour:
extension UIImage {
    static func withColor(_ color: UIColor, size: CGSize = CGSize(width: 10, height: 10)) -> UIImage? {
        UIGraphicsBeginImageContext(size)
        guard let context = UIGraphicsGetCurrentContext() else { return nil }

        context.setFillColor(color.cgColor)
        context.fill(CGRect(origin: .zero, size: size))

        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}