Adding semi-transparent images as textures in SceneKit - iOS

When I add a semi-transparent image (sample) as a texture for an SCNNode, how can I specify a color for the node where the image is transparent? Since I can specify either a color or an image as a material property's contents, but not both, I can't also give the node a color value. Is there a way to specify both a color and an image for the material property, or is there a workaround for this problem?

If you assign the image to the contents of the material's transparent property, you can change the material's transparencyMode to either .aOne or .rgbZero.
.aOne means that transparency is derived from the image's alpha channel.
.rgbZero means that transparency is derived from the luminance (the total red, green, and blue) in the image.
You cannot configure an arbitrary color to be treated as transparency without a custom shader.
However, from the looks of your sample image, I would think that assigning the sample image to the transparent material property's contents and using the .aOne transparency mode would give you the result you are looking for.
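For reference, a minimal sketch of that setup (the node variable and the image name "sample" are placeholders for your own objects):
import SceneKit

// Derive the node's transparency from the image's alpha channel.
let material = node.geometry!.firstMaterial!
material.transparent.contents = UIImage(named: "sample")
material.transparencyMode = .aOne // .rgbZero would use luminance instead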

I'm posting this as a new answer because it's different from the other answer.
Based on your comment, I understand that you want to use an image with transparency as the diffuse content of a material, but use a background color wherever the image is transparent. In other words, you want to use a composite of the image over a color as the diffuse contents.
Using UIImage
There are a few different ways you can achieve this composited image. The easiest and likely most familiar solution is to create a new UIImage that draws the image over the color. This new image will have the same size and scale as your image, but can be opaque since it has a solid background color.
func imageByComposing(image: UIImage, over color: UIColor) -> UIImage {
    // Opaque context: the solid background color fills every pixel.
    UIGraphicsBeginImageContextWithOptions(image.size, true, image.scale)
    defer {
        UIGraphicsEndImageContext()
    }

    let imageRect = CGRect(origin: .zero, size: image.size)

    // fill with background color
    color.set()
    UIRectFill(imageRect)

    // draw image on top
    image.draw(in: imageRect)

    return UIGraphicsGetImageFromCurrentImageContext()!
}
Using this image as the contents of the diffuse material property will give you the effect that you're after.
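For example (a sketch; overlayImage, backgroundColor, and material stand in for your own objects):
// Composite once up front, then use the opaque result as the diffuse contents.
let composited = imageByComposing(image: overlayImage, over: backgroundColor)
material.diffuse.contents = composited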
Using Shader Modifiers
If you find yourself having to change the color very frequently (possibly animating it), you could also use custom shaders or shader modifiers to composite the image over the color.
In that case, you want to composite the image A over the color B, so that the output color C_out is (this is the premultiplied-alpha form of the "over" operator):
C_out = C_A + C_B * (1 - α_A)
By passing the image as the diffuse contents and assigning the output back to the diffuse contents, the expression can be simplified to:
C_diffuse = C_diffuse + C_color * (1 - α_diffuse)
C_diffuse += C_color * (1 - α_diffuse)
Generally the output alpha would depend on the alphas of A and B, but since B (the color) is opaque (alpha = 1), the output alpha is also 1.
This can be written as a small shader modifier. Since the motivation for this solution was to be able to change the color, the color is declared as a uniform variable which can be updated from code.
// Define a color that can be set/changed from code
uniform vec3 backgroundColor;
#pragma body
// Composite A (the image) over B (the color):
// output = image + color * (1-alpha_image)
float alpha = _surface.diffuse.a;
_surface.diffuse.rgb += backgroundColor * (1.0 - alpha);
// make fully opaque (since the color is fully opaque)
_surface.diffuse.a = 1.0;
This shader modifier would then be read from its file and set in the material's shaderModifiers dictionary:
enum ShaderLoadingError: Error {
    case fileNotFound, failedToLoad
}

func shaderModifier(named shaderName: String, fileExtension: String = "glsl") throws -> String {
    guard let url = Bundle.main.url(forResource: shaderName, withExtension: fileExtension) else {
        throw ShaderLoadingError.fileNotFound
    }
    do {
        return try String(contentsOf: url)
    } catch {
        throw ShaderLoadingError.failedToLoad
    }
}
// later, in the code that configures the material ...
do {
    let modifier = try shaderModifier(named: "Composite") // the name of the shader modifier file (assuming a 'glsl' file extension)
    theMaterial.shaderModifiers = [.surface: modifier]
} catch {
    // Handle the error here
    print(error)
}
You would then be able to change the color by setting a new value for "backgroundColor" on the material. Note that there is no initial value, so one has to be set.
let backgroundColor = SCNVector3Make(1.0, 0.0, 0.7) // r, g, b
// Set the color components as an SCNVector3 wrapped in an NSValue,
// for the same key as the name of the uniform variable in the shader modifier
theMaterial.setValue(NSValue(scnVector3: backgroundColor), forKey: "backgroundColor")
As you can see, the first solution is simpler and the one I would recommend if it suits your needs. The second solution is more complicated, but enables the background color to be animated.
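Since the uniform is exposed through key-value coding, it should also be animatable with Core Animation, using the uniform name as the key path. A sketch (the from/to colors are arbitrary):
let colorAnimation = CABasicAnimation(keyPath: "backgroundColor")
colorAnimation.fromValue = NSValue(scnVector3: SCNVector3Make(1.0, 0.0, 0.7))
colorAnimation.toValue = NSValue(scnVector3: SCNVector3Make(0.0, 0.3, 1.0))
colorAnimation.duration = 2.0
theMaterial.addAnimation(colorAnimation, forKey: "backgroundColor")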

Just in case someone comes across this in the future... for some tasks, rickster's solution is likely the easiest. In my case, I wanted to display a grid on top of an image that was mapped to a sphere. I originally composited the images into one and applied that, but over time I got fancier and this started getting complex. So I made two spheres, one inside the other: I put the grid on the inner one and the image on the outer one, and presto...
let outSphereGeometry = SCNSphere(radius: 20)
outSphereGeometry.segmentCount = 100
let outSphereMaterial = SCNMaterial()
outSphereMaterial.diffuse.contents = topImage
outSphereMaterial.isDoubleSided = true
outSphereGeometry.materials = [outSphereMaterial]
outSphere = SCNNode(geometry: outSphereGeometry)
outSphere.position = SCNVector3(x: 0, y: 0, z: 0)
let sphereGeometry = SCNSphere(radius: 10)
sphereGeometry.segmentCount = 100
let sphereMaterial = SCNMaterial()
sphereMaterial.diffuse.contents = gridImage
sphereMaterial.isDoubleSided = true
sphereGeometry.materials = [sphereMaterial]
sphere = SCNNode(geometry: sphereGeometry)
sphere.position = SCNVector3(x: 0, y: 0, z: 0)
I was surprised that I didn't need to set sphereMaterial.transparency; it seems to get this automatically.

Related

Can I make a shadow that can be seen through a transparent object with SceneKit and ARKit?

I made a transparent object with SceneKit and linked it with ARKit.
I gave the object a shadow with a lighting material, but I can't see the shadow through the transparent object.
I made a plane, placed the object on it, and pointed the light at the transparent object.
The shadow appears behind the object, but cannot be seen through the object.
Here's the code that makes the shadow:
let light = SCNLight()
light.type = .directional
light.castsShadow = true
light.shadowRadius = 200
light.shadowColor = UIColor(red: 0, green: 0, blue: 0, alpha: 0.3)
light.shadowMode = .deferred
let constraint = SCNLookAtConstraint(target: model)
lightNode = SCNNode()
lightNode!.light = light
lightNode!.position = SCNVector3(model.position.x + 10, model.position.y + 30, model.position.z+30)
lightNode!.eulerAngles = SCNVector3(45.0, 0, 0)
lightNode!.constraints = [constraint]
sceneView.scene.rootNode.addChildNode(lightNode!)
And the code below makes a floor under the bottle:
let floor = SCNFloor()
floor.reflectivity = 0
let material = SCNMaterial()
material.diffuse.contents = UIColor.white
material.colorBufferWriteMask = SCNColorMask(rawValue:0)
floor.materials = [material]
self.floorNode = SCNNode(geometry: floor)
self.floorNode!.position = SCNVector3(x, y, z)
self.sceneView.scene.rootNode.addChildNode(self.floorNode!)
I think it can be solved with a simple property, but I can't figure out which one.
How can I solve the problem?
A known issue with deferred shading is that it doesn't work with transparency, so you may have to remove that line and use the default forward shading again. That said, the "simple property" you are looking for is the .renderingOrder property on the SCNNode. Set it to 99, for example. Normally the rendering order doesn't matter because the z-buffer is used to determine which pixel is in front of others. For the shadow to show up through the transparent part of the object, you need to make sure the object is rendered last.
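For example (a sketch; model and light are the nodes from the question's code):
light.shadowMode = .forward // the default; .deferred doesn't work with transparency
model.renderingOrder = 99   // render the transparent object after everything else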
On a different note, assuming you used some of the material settings I posted on your other question, try setting the shininess value to something like 0.4.
Note that this will still create a shadow as if the object was not transparent at all, so it won’t create a darker shadow for the label and cap. For additional realism you could opt to fake the shadow entirely, as in using a texture for the shadow and drop that on a plane which you rotate and skew as needed. For even more realism, you could fake the caustics that way too.
You may also want to add a reflection map to the reflective property of the material. It's almost the same as a texture map but in grayscale, where the label and cap are dark gray (not very reflective) and the glass portion is a lighter gray (otherwise it will look like the label is on the inside of the glass). Last tip: use a Shell modifier (that's what it's called in 3ds Max, anyway) to give the glass model some thickness.

How do you play a video with alpha channel using AVFoundation?

I have an AR application which uses SceneKit and imports a video onto the scene using an AVPlayer, adding it as an SKVideoNode in a SpriteKit scene that is mapped onto an SCNPlane.
The video is visible as it is supposed to be, but the transparency in the video is not achieved.
Code as follows:
let spriteKitScene = SKScene(size: CGSize(width: self.sceneView.frame.width, height: self.sceneView.frame.height))
spriteKitScene.scaleMode = .aspectFit
guard let fileURL = Bundle.main.url(forResource: "Triple_Tap_1", withExtension: "mp4") else {
    return
}
let videoPlayer = AVPlayer(url: fileURL)
videoPlayer.actionAtItemEnd = .none
let videoSpriteKitNode = SKVideoNode(avPlayer: videoPlayer)
videoSpriteKitNode.position = CGPoint(x: spriteKitScene.size.width / 2.0, y: spriteKitScene.size.height / 2.0)
videoSpriteKitNode.size = spriteKitScene.size
videoSpriteKitNode.yScale = -1.0
videoSpriteKitNode.play()
spriteKitScene.backgroundColor = .clear
spriteKitScene.addChild(videoSpriteKitNode)
let background = SCNPlane(width: CGFloat(2), height: CGFloat(2))
background.firstMaterial?.diffuse.contents = spriteKitScene
let backgroundNode = SCNNode(geometry: background)
backgroundNode.position = position
backgroundNode.constraints = [SCNBillboardConstraint()]
backgroundNode.rotation.z = 0
self.sceneView.scene.rootNode.addChildNode(backgroundNode)
// Create a transform with a translation of 0.2 meters in front of the camera.
var translation = matrix_identity_float4x4
translation.columns.3.z = -0.2
let transform = simd_mul((self.session.currentFrame?.camera.transform)!, translation)
// Add a new anchor to the session.
let anchor = ARAnchor(transform: transform)
self.sceneView.session.add(anchor: anchor)
What would be the best way to implement transparency for the Triple_Tap_1 video in this case?
I have gone through some Stack Overflow questions on this topic, and found the only solution to be a KittyBoom repository that was created back in 2013, using Objective-C.
I'm hoping that the community can reveal a better solution for this problem. The GPUImage library is not something I could get to work.
I've come up with two ways of making this possible. Both utilize surface shader modifiers. Detailed information on shader modifiers can be found in the Apple Developer Documentation.
Here's an example project I've created.
1. Masking
You would need to create another video that represents a transparency mask. In that video, black = fully opaque, white = fully transparent (or any other way you would like to represent transparency; you would just need to tweak the surface shader).
Create an SKScene with this video, just like you do in the code you provided, and put it into material.transparent.contents (the same material that you put the diffuse video contents into):
let spriteKitOpaqueScene = SKScene(...)
let spriteKitMaskScene = SKScene(...)
... // creating SKVideoNodes and AVPlayers for each video etc
let material = SCNMaterial()
material.diffuse.contents = spriteKitOpaqueScene
material.transparent.contents = spriteKitMaskScene
let background = SCNPlane(...)
background.materials = [material]
Add a surface shader modifier to the material. It is going to "convert" the black color from the mask video (well, actually the red color, since we only need one color component) into alpha:
let surfaceShader = "_surface.transparent.a = 1 - _surface.transparent.r;"
material.shaderModifiers = [ .surface: surfaceShader ]
That's it! Now the white color on the masking video is going to be transparent on the plane.
However, you would have to take extra care to synchronize these two videos, since the AVPlayers will probably get out of sync. Sadly I didn't have time to address that in my example project (yet; I will get back to it when I have time). Look into this question for a possible solution.
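One possible way to keep the players in step (a sketch, not something from my example project) is to start them at the same host time with AVPlayer's setRate(_:time:atHostTime:):
import AVFoundation

// `opaquePlayer` and `maskPlayer` are the two AVPlayers created above.
opaquePlayer.automaticallyWaitsToMinimizeStalling = false
maskPlayer.automaticallyWaitsToMinimizeStalling = false

// Start both players half a second from now, at the same host time.
let now = CMClockGetTime(CMClockGetHostTimeClock())
let startTime = CMTimeAdd(now, CMTimeMake(value: 1, timescale: 2))
opaquePlayer.setRate(1.0, time: .zero, atHostTime: startTime)
maskPlayer.setRate(1.0, time: .zero, atHostTime: startTime)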
Pros:
No artifacts (if synchronized)
Precise
Cons:
Requires two videos instead of one
Requires synchronisation of the AVPlayers
2. Chroma keying
You would need a video that has a vibrant color as the background to represent the parts that should be transparent. Usually green or magenta is used.
Create an SKScene for this video like you normally would and put it into material.diffuse.contents.
Add a chroma-key surface shader modifier which will cut out the color of your choice and make those areas transparent. I've borrowed this shader from GPUImage; I don't really know how it actually works, but it seems to be explained in this answer.
let surfaceShader =
"""
uniform vec3 c_colorToReplace = vec3(0, 1, 0);
uniform float c_thresholdSensitivity = 0.05;
uniform float c_smoothing = 0.0;
#pragma transparent
#pragma body
vec3 textureColor = _surface.diffuse.rgb;
float maskY = 0.2989 * c_colorToReplace.r + 0.5866 * c_colorToReplace.g + 0.1145 * c_colorToReplace.b;
float maskCr = 0.7132 * (c_colorToReplace.r - maskY);
float maskCb = 0.5647 * (c_colorToReplace.b - maskY);
float Y = 0.2989 * textureColor.r + 0.5866 * textureColor.g + 0.1145 * textureColor.b;
float Cr = 0.7132 * (textureColor.r - Y);
float Cb = 0.5647 * (textureColor.b - Y);
float blendValue = smoothstep(c_thresholdSensitivity, c_thresholdSensitivity + c_smoothing, distance(vec2(Cr, Cb), vec2(maskCr, maskCb)));
float a = blendValue;
_surface.transparent.a = a;
"""
material.shaderModifiers = [ .surface: surfaceShader ]
To set the uniforms, use the setValue(_:forKey:) method on the material.
let vector = SCNVector3(x: 0, y: 1, z: 0) // represents float RGB components
material.setValue(vector, forKey: "c_colorToReplace")
material.setValue(0.3 as Float, forKey: "c_smoothing")
material.setValue(0.1 as Float, forKey: "c_thresholdSensitivity")
The as Float part is important; otherwise Swift will infer the value to be a Double and the shader will not be able to use it.
But to get precise masking from this you would have to really tinker with the c_smoothing and c_thresholdSensitivity uniforms. In my example project I ended up with a little green rim around the shape, but maybe I just didn't use the right values.
Pros:
only one video required
simple setup
Cons:
possible artifacts (green rim around the border)

MTKView Displaying Wide Gamut P3 Colorspace

I'm building a real-time photo editor based on CIFilters and MetalKit. But I'm running into an issue with displaying wide gamut images in a MTKView.
Standard sRGB images display just fine, but Display P3 images are washed out.
I've tried setting the CIContext.render colorspace as the image colorspace, and still experience the issue.
Here are snippets of the code:
guard let inputImage = CIImage(mtlTexture: sourceTexture!) else { return }
let outputImage = imageEditor.processImage(inputImage)
print(colorSpace)
context.render(outputImage,
               to: currentDrawable.texture,
               commandBuffer: commandBuffer,
               bounds: inputImage.extent,
               colorSpace: colorSpace)
commandBuffer?.present(currentDrawable)

let pickedImage = info[UIImagePickerControllerOriginalImage] as! UIImage
print(pickedImage.cgImage?.colorSpace)
if let cspace = pickedImage.cgImage?.colorSpace {
    colorSpace = cspace
}
I have found a similar issue on the Apple developer forums, but without any answers: https://forums.developer.apple.com/thread/66166
In order to support the wide color gamut, you need to set the colorPixelFormat of your MTKView to either .bgra10_xr or .bgra10_xr_srgb. I suspect the colorSpace property of macOS MTKViews won't be supported on iOS because color management in iOS is not active but targeted (read Best practices for color management).
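For example, wherever you configure the view:
metalView.colorPixelFormat = .bgra10_xr_srgb // or .bgra10_xr for gamma-encoded output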
Without seeing your images and their actual values, it is hard to diagnose, but I'll explain my findings & experiments. I suggest you start like I did, by debugging a single color.
For instance, what's the reddest point in P3 color space? It can be defined through a UIColor like this:
UIColor(displayP3Red: 1, green: 0, blue: 0, alpha: 1)
Add a UIButton to your view with the background set to that color for debugging purposes. You can either get the components in code to see what those values become in sRGB,
var fRed : CGFloat = 0
var fGreen : CGFloat = 0
var fBlue : CGFloat = 0
var fAlpha : CGFloat = 0
let c = UIColor(displayP3Red: 1, green: 0, blue: 0, alpha: 1)
c.getRed(&fRed, green: &fGreen, blue: &fBlue, alpha: &fAlpha)
or you can use the Calculator in the macOS ColorSync Utility.
Make sure you select Extended Range, otherwise the values will be clamped to 0 and 1.
So, as you can see, your P3(1, 0, 0) corresponds to (1.0930, -0.2267, -0.1501) in extended sRGB.
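You can double-check that conversion in code (a sketch; converted(to:intent:options:) is available from iOS 11):
import UIKit

let p3Red = UIColor(displayP3Red: 1, green: 0, blue: 0, alpha: 1)
if let extendedSRGB = CGColorSpace(name: CGColorSpace.extendedSRGB),
   let converted = p3Red.cgColor.converted(to: extendedSRGB, intent: .relativeColorimetric, options: nil) {
    print(converted.components ?? []) // ≈ [1.0930, -0.2267, -0.1501, 1.0]
}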
Now, back to your MTKView,
If you set the colorPixelFormat of your MTKView to .bgra10_xr, then you obtain the brightest red if the output of your shader is
(1.0930, -0.2267, -0.1501)
If you set the colorPixelFormat of your MTKView to .bgra10_xr_srgb, then you obtain the brightest red if the output of your shader is
(1.22486, -0.0420312, -0.0196301)
because you have to write a linear RGB value, since this texture format will apply the gamma correction for you. Be careful when applying the inverse gamma, since there are negative values. I use this function,
let f = { (c: Float) -> Float in
    if abs(c) <= 0.04045 {
        return c / 12.92
    }
    // keep the sign, since extended-range components can be negative
    let sign: Float = c < 0 ? -1 : 1
    return sign * powf((abs(c) + 0.055) / 1.055, 2.4)
}
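Applying f to the extended sRGB components from above reproduces the linear values quoted earlier (a quick check, not from the original answer):
let linear = [1.0930, -0.2267, -0.1501].map { f(Float($0)) }
// ≈ [1.2249, -0.0420, -0.0196]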
The last missing piece is creating a wide gamut UIImage. Set the color space to CGColorSpace.displayP3 and copy the data over. But what data, right? The brightest red in this image will be
(1, 0, 0)
or (65535, 0, 0) in 16-bit ints.
What I do in my code is use .rgba16Unorm textures to manipulate images in the displayP3 color space, where (1, 0, 0) is the brightest red in P3. This way, I can directly copy their contents over to a UIImage. Then, for displaying, I pass a color transform to the shader to convert from P3 to extended sRGB (so, not saturating the colors) before displaying. I use linear color, so my transform is just a 3x3 matrix. I set my view to .bgra10_xr_srgb, so the gamma is applied automatically for me.
That (column-major) matrix is,
1.2249 -0.2247 0
-0.0420 1.0419 0
-0.0197 -0.0786 1.0979
You can read about how I generated it here: Exploring the display-P3 color space
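In Swift, taking each vertical column of the listing above as a matrix column, the transform could be written as (a sketch):
import simd

let p3ToExtendedLinearSRGB = simd_float3x3(columns: (
    SIMD3<Float>( 1.2249, -0.0420, -0.0197),
    SIMD3<Float>(-0.2247,  1.0419, -0.0786),
    SIMD3<Float>( 0.0,     0.0,     1.0979)
))
let linearRed = p3ToExtendedLinearSRGB * SIMD3<Float>(1, 0, 0) // ≈ (1.2249, -0.0420, -0.0197)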
Here's an example I built using UIButtons and an MTKView, screen-captured on an iPhone X.
The button on the left is the brightest red in sRGB, while the button on the right uses a displayP3 color. At the center, I placed an MTKView that outputs the transformed linear color as described above.
Same experiment for green:
Now, if you see this on a recent iPhone or iPad, you should see that both the square in the center and the button on the right have the same bright colors. If you see this on a Mac that can't display them, the left button will appear to be the same color. If you see this on a Windows machine or in a browser without proper color management, the left button may also appear to be a different color, but that's only because the whole image is interpreted as sRGB and those pixels have different values, so the appearance won't be correct.
If you want more references, check the testP3UIColor unit test I added here: ColorTests.swift,
my functions to initialize the UIImage: Image.swift,
and a sample app to try out the conversions: SampleColorPalette
I haven't experimented with CIImages, but I guess the same principles apply.
I hope this information is of some help. It also took me a long time to figure out how to display colors properly, because I couldn't find any explicit reference to displayP3 support in the Metal SDK documentation.

SKEffectNode to an SKTexture?

SKEffectNodes have a shouldRasterize "switch" that bakes them into a bitmap, and doesn't update them until the underlying nodes affected by the effect are changed.
However, I can't find a way to create an SKTexture from this rasterized "image".
Is it possible to get an SKTexture from an SKEffectNode?
I think you could try a code like this (it's just an example):
if let effect = SKEffectNode(fileNamed: "myeffect") {
    effect.shouldRasterize = true
    self.addChild(effect)
    ...
    let texture = SKView().texture(from: self)
}
Update:
After your answer, I hope I've understood better what you want to achieve.
This is my point of view: if you want to make a shadow of a texture, you could simply create an SKSpriteNode with this texture:
let shadow = SKSpriteNode(texture: <yourTexture>)
shadow.blendMode = SKBlendMode.alpha
shadow.colorBlendFactor = 1
shadow.color = SKColor.black
shadow.alpha = 0.25
What I want to say is that you could proceed step by step:
get your texture
elaborate your texture (add filters, make some other effect..)
get shadow
This way of working produces a series of useful methods you can reuse in your project to build other kinds of elements.
Maybe, by separating the tasks, you don't need to use texture(from:) at all.
I've figured this out, in a way that solves my problems, using a factory.
Read more on how to make a factory, from BenMobile's patient and clear articulation, here: Factory creation and use for making Sprites and Shapes
There's an issue with blurring an SKTexture or SKSpriteNode in that the blur runs out of space: the blur/glow goes beyond the edges of the sprite. To solve this, in the code below you'll see I've created a "framer" object. This is simply an empty SKSpriteNode that's double the size of the texture to be blurred. The texture to be blurred is added as a child of this "framer" object.
It works, regardless of how hacky this is ;)
Inside a static factory class file:
import SpriteKit
class Factory {
    private static let view: SKView = SKView() // the magic. This is the rendering space

    static func makeShadow(from source: SKTexture, rgb: SKColor, a: CGFloat) -> SKSpriteNode {
        let shadowNode = SKSpriteNode(texture: source)
        shadowNode.colorBlendFactor = 0.5 // near 1 makes following line more effective
        shadowNode.color = SKColor.gray // makes for a darker shadow. White for "glow" shadow

        let textureSize = source.size()
        let doubleTextureSize = CGSize(width: textureSize.width * 2, height: textureSize.height * 2)
        let framer = SKSpriteNode(color: UIColor.clear, size: doubleTextureSize)
        framer.addChild(shadowNode)

        let blurAmount = 10
        let filter = CIFilter(name: "CIGaussianBlur")
        filter?.setValue(blurAmount, forKey: kCIInputRadiusKey)

        let fxNode = SKEffectNode()
        fxNode.filter = filter
        fxNode.blendMode = .alpha
        fxNode.addChild(framer)
        fxNode.shouldRasterize = true

        let tex = view.texture(from: fxNode) // 'view' refers to the magic first line
        let shadow = SKSpriteNode(texture: tex) // WHOOPEE!!! TEXTURE!!!
        shadow.colorBlendFactor = 0.5
        shadow.color = rgb
        shadow.alpha = a
        shadow.zPosition = -1
        return shadow
    }
}
Inside anywhere you can access the Sprite you want to make a shadow or glow texture for:
shadowSprite = Factory.makeShadow(from: button, rgb: myColor, a: 0.33)
shadowSprite.position = CGPoint(x: self.frame.midX, y: self.frame.midY - 5)
addChild(shadowSprite)
button is a texture of the button to be given a shadow. a: is an alpha setting (actually a transparency level, 0.0 to 1.0, where 1.0 is fully opaque); the lower this is, the lighter the shadow will be.
The positioning serves to drop the shadow slightly below the button, so it looks like light is coming from the top, casting shadows down and onto the background.

How to tint only one part of the UIImage with alpha channel - PNG (replacing color)?

I have this transparent image:
My goal is to change the color of the "ME!" part: either tint only the last third of the image, or replace the blue color with a new color.
Expected result after the color change:
Unfortunately, neither approach worked for me. To change the specific color I tried this: LINK, but as the documentation says, that works only without an alpha channel!
Then I tried this one: LINK, but it actually does nothing; no tint or anything.
Is there any other way to tint only one part of the image, or to just replace a specific color?
I know I could slice the image into two parts, but I hope there is another way.
It turns out to be surprisingly complicated: you'd think you could do it in one pass with Core Graphics blend modes, but from pretty extensive experimentation I haven't found such a way that doesn't mangle the alpha channel or the coloration. The solution I landed on is this:
1. Start with a grayscale/alpha version of your image rather than a colored one: black in the areas you don't want tinted, white in the areas you do
2. Create an image context with your image's dimensions
3. Fill that context with black
4. Draw the image into the context
5. Get a new image (let's call it "the-image-over-black") from that context
6. Clear the context (so you can use it again)
7. Fill the context with the color you want the tinted part of your image to be
8. Draw the-image-over-black into the context with the "multiply" blend mode
9. Draw the original image into the context with the "destination in" blend mode
10. Get your final image from the context
The reason this works is because of the combination of blend modes. What you’re doing is creating a fully-opaque black-and-white image (step 5), then multiplying it by your final color (step 8), which gives you a fully opaque black-and-your-final-color image. Then, you take the original image, which still has its alpha channel, and draw it with the “destination in” blend mode which takes the color from the black-and-your-color image and the alpha channel from the original image. The result is a tinted image with the original brightness values and alpha channel.
Objective-C
- (UIImage *)createTintedImageFromImage:(UIImage *)originalImage color:(UIColor *)desiredColor {
    CGSize imageSize = originalImage.size;
    CGFloat imageScale = originalImage.scale;
    CGRect contextBounds = CGRectMake(0, 0, imageSize.width, imageSize.height);

    UIGraphicsBeginImageContextWithOptions(imageSize, NO /* not opaque */, imageScale); // 2
    [[UIColor blackColor] setFill]; // 3a
    UIRectFill(contextBounds); // 3b
    [originalImage drawAtPoint:CGPointZero]; // 4

    UIImage *imageOverBlack = UIGraphicsGetImageFromCurrentImageContext(); // 5
    CGContextClearRect(UIGraphicsGetCurrentContext(), contextBounds); // 6

    [desiredColor setFill]; // 7a
    UIRectFill(contextBounds); // 7b
    [imageOverBlack drawAtPoint:CGPointZero blendMode:kCGBlendModeMultiply alpha:1]; // 8
    [originalImage drawAtPoint:CGPointZero blendMode:kCGBlendModeDestinationIn alpha:1]; // 9

    UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext(); // 10
    UIGraphicsEndImageContext();

    return finalImage;
}
Swift 4
func createTintedImageFromImage(originalImage: UIImage, desiredColor: UIColor) -> UIImage {
    let imageSize = originalImage.size
    let imageScale = originalImage.scale
    let contextBounds = CGRect(origin: .zero, size: imageSize)

    UIGraphicsBeginImageContextWithOptions(imageSize, false /* not opaque */, imageScale) // 2
    defer { UIGraphicsEndImageContext() }

    UIColor.black.setFill() // 3a
    UIRectFill(contextBounds) // 3b
    originalImage.draw(at: .zero) // 4
    guard let imageOverBlack = UIGraphicsGetImageFromCurrentImageContext() else { return originalImage } // 5

    // (6 is omitted here: the opaque fill in 7a/7b covers the whole context anyway)
    desiredColor.setFill() // 7a
    UIRectFill(contextBounds) // 7b
    imageOverBlack.draw(at: .zero, blendMode: .multiply, alpha: 1) // 8
    originalImage.draw(at: .zero, blendMode: .destinationIn, alpha: 1) // 9

    guard let finalImage = UIGraphicsGetImageFromCurrentImageContext() else { return originalImage } // 10
    return finalImage
}
There are lots of ways to do this.
Core Image filters come to mind as a good way to go. Since the part you want to change is a unique color, you could use the Core Image CIHueAdjust filter to shift the hue from blue to red. Only the word you want to change has any color to it, so that's all it would change.
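A minimal sketch of that approach (the image name and the hue angle are placeholders; CIHueAdjust's angle is in radians):
import CoreImage
import UIKit

let input = CIImage(image: UIImage(named: "me")!)!
let hueFilter = CIFilter(name: "CIHueAdjust")!
hueFilter.setValue(input, forKey: kCIInputImageKey)
hueFilter.setValue(2.1, forKey: kCIInputAngleKey) // roughly a blue-to-red hue rotation
let output = hueFilter.outputImage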
If you had an image with various colors in it and still wanted to replace all pixels of one color, you could use CIColorCube to map the blue pixels to red without affecting other colors. There was a thread on this board last week with sample code using CIColorCube to force one color to another. Search on CIColorCube and look for the most recent post and you should be able to find it.
If you wanted to limit the change to a specific area of the screen, you could probably come up with a sequence of Core Image filters that would limit your changes to just the target area.
You could also slice out the part you want to change, color edit it using any of a variety of techniques, and then composite it back together.
Another way is to use the Core Image ColorCube filter.
I made a category for myself when I had this problem. It is for NSImage, but I think it should work for UIImage after some updating:
https://github.com/braginets/NSImage-replace-color
