Where is the CGColorSpace for RGBA? - ios

I am trying to draw a circle shaded with a gradient from white to transparent. I am using Core Graphics.
Here is what I have which draws a gradient from white to black:
let colorSpace = CGColorSpaceCreateDeviceRGB();
let colors = [UIColor.white.cgColor, UIColor.black.cgColor] as CFArray;
let locations : [CGFloat] = [0.0, 1.0];
let glowGradient : CGGradient = CGGradient.init(colorsSpace: colorSpace, colors: colors, locations: locations)!;
let ctx = UIGraphicsGetCurrentContext()!;
ctx.drawRadialGradient(glowGradient, startCenter: rectCenter, startRadius: 0, endCenter: rectCenter, endRadius: imageWidthPts/2, options: []);
However, I do not want to draw white-to-black; I want to draw white-to-transparent.
To do so, I first tried changing the end color to UIColor.white.cgColor.copy(alpha: 0.0) (i.e., transparent white). However, this failed with:
fatal error: unexpectedly found nil while unwrapping an Optional value
I assume this error is due to the color being outside the specified RGB color space (CGColorSpaceCreateDeviceRGB()).
The fix would seem to be to change the specified color space to one with an alpha component, such as RGBA. However, such color spaces do not appear to exist! There are only CGColorSpaceCreateDeviceRGB, CGColorSpaceCreateDeviceCMYK, and CGColorSpaceCreateDeviceGray.
But it makes no sense for there to be no available color spaces with an alpha component. The documentation explicitly describes support for alpha in gradients. The documentation for CGGradient.init says:
For example, if the color space is an RGBA color space and you want to use two colors in the gradient (one for a starting location and another for an ending location), then you need to provide 8 values in components—red, green, blue, and alpha values for the first color, followed by red, green, blue, and alpha values for the second color.
This RGBA encoding makes perfect sense, but it's impossible to tell Core Graphics that I'm using such an RGBA encoding, because there is no RGBA color space!
Where is the CGColorSpace for RGBA?

You don't need the RGBA color space to draw transparent radial/linear gradients. The RGB color space is enough. If it's not drawing it transparently, you probably have the background color of the view or the context misconfigured.
If you're creating a context, make sure you pass in false for opaque (see: iPhone - How to make context background transparent?).
If you're using a CALayer on a UIView, you need to make sure that the UIView's background color is set to UIColor.clear. If it's set to nil, you'll end up with the gradient blending with black instead.
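For the context case, here's a minimal sketch (the size and drawing code are placeholders) of creating a non-opaque image context so the gradient's alpha is actually preserved:
let size = CGSize(width: 200, height: 200)
UIGraphicsBeginImageContextWithOptions(size, false, 0) // false = not opaque, so the context keeps an alpha channel
defer { UIGraphicsEndImageContext() }
let ctx = UIGraphicsGetCurrentContext()!
// ... draw the radial gradient into ctx here ...
let image = UIGraphicsGetImageFromCurrentImageContext()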

A slightly unsatisfactory answer: you can get the color space of a context with .colorSpace. In my case, it seems to give me an RGBA space, but I can't see any guarantee of this.
Here's the gradient using .colorSpace:
let ctx = UIGraphicsGetCurrentContext()!;
let colorSpace = ctx.colorSpace!;
let colorComponents : [CGFloat] = [
    // R    G     B     A
       1.0,  1.0,  1.0,  1.0,
       1.0,  1.0,  1.0,  0.0,
];
let locations : [CGFloat] = [0.0, 1.0];
let glowGradient : CGGradient = CGGradient.init(
    colorSpace: colorSpace,
    colorComponents: colorComponents,
    locations: locations,
    count: locations.count
)!;
ctx.drawRadialGradient(glowGradient, startCenter: rectCenter, startRadius: 0, endCenter: rectCenter, endRadius: imageWidthPts/2, options: []);
It's particularly confusing that in my case, the colorSpace.numberOfComponents evaluates to 3, i.e. not RGBA, and yet it still correctly interprets the alpha component in the gradient. ¯\_(ツ)_/¯

Related

How to programmatically set color in gimp (pythonfu)?

Using Python, I was able to add a floating text layer using
text_layer = pdb.gimp_text_fontname(image, drawable, x, y, text, border, antialias, size, size_type, fontname)
However, the text appears in black. Specifically pdb.gimp_text_layer_get_color(text_layer) returns RGB (0.0, 0.0, 0.0, 1.0)
I want the font to be in a different color.
I tried
col = gimpcolor.RGB(44.7, 46.7, 58.0)
pdb.gimp_text_layer_set_color(text_layer, col)
Trying pdb.gimp_text_layer_get_color(text_layer) now returns RGB (44.7, 46.7, 58.0, 1.0), but the text is now white.
How to make the text appear in my desired color or where does gimpcolor object's documentation exist?
The text is created with the current foreground color:
# Two ways to set the color (pick one)
# Color set with the gimp object
gimp.set_foreground(gimpcolor.RGB(0,0,255))
# Color set via PDB API
pdb.gimp_context_set_foreground(gimpcolor.RGB(0,0,255))
text_layer = pdb.gimp_text_fontname(image, None, 100, 100, 'Gimp', 0, True, 80, 0, 'Bungee')
Also: careful with number types of the color channels. AFAIK for gimpcolor.RGB(r,g,b):
if the r/g/b argument is a float, it is understood as being in the [0.0 .. 1.0] range (and clamped if outside that range)
if it is an integer, it is understood as being in the [0 .. 255] range
So for instance gimpcolor.RGB(1.,1.,1.) is white (1. = 100%), but gimpcolor.RGB(1,1,1) is nearly black (1/255 = 0.4%).

MTKView Displaying Wide Gamut P3 Colorspace

I'm building a real-time photo editor based on CIFilters and MetalKit. But I'm running into an issue with displaying wide gamut images in a MTKView.
Standard sRGB images display just fine, but Display P3 images are washed out.
I've tried setting the CIContext.render colorspace as the image colorspace, and still experience the issue.
Here are snippets of the code:
guard let inputImage = CIImage(mtlTexture: sourceTexture!) else { return }
let outputImage = imageEditor.processImage(inputImage)
print(colorSpace)
context.render(outputImage,
               to: currentDrawable.texture,
               commandBuffer: commandBuffer,
               bounds: inputImage.extent,
               colorSpace: colorSpace)
commandBuffer?.present(currentDrawable)
let pickedImage = info[UIImagePickerControllerOriginalImage] as! UIImage
print(pickedImage.cgImage?.colorSpace)
if let cspace = pickedImage.cgImage?.colorSpace {
    colorSpace = cspace
}
I have found a similar issue on the Apple developer forums, but without any answers: https://forums.developer.apple.com/thread/66166
In order to support the wide color gamut, you need to set the colorPixelFormat of your MTKView to either .bgra10_XR or .bgra10_XR_sRGB. I suspect the colorSpace property of macOS MTKViews won't be supported on iOS because color management in iOS is not active but targeted (read Best practices for color management).
Without seeing your images and their actual values, it is hard to diagnose, but I'll explain my findings & experiments. I suggest you start like I did, by debugging a single color.
For instance, what's the reddest point in P3 color space? It can be defined through a UIColor like this:
UIColor(displayP3Red: 1, green: 0, blue: 0, alpha: 1)
Add a UIButton to your view with the background set to that color for debugging purposes. You can either get the components in code to see what those values become in sRGB,
var fRed : CGFloat = 0
var fGreen : CGFloat = 0
var fBlue : CGFloat = 0
var fAlpha : CGFloat = 0
let c = UIColor(displayP3Red: 1, green: 0, blue: 0, alpha: 1)
c.getRed(&fRed, green: &fGreen, blue: &fBlue, alpha: &fAlpha)
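// On a P3-capable device these come back as extended sRGB components,
// roughly (1.093, -0.227, -0.150, 1.0) for the P3 red above (the same
// numbers discussed below)
print(fRed, fGreen, fBlue, fAlpha)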
or you can use the Calculator in macOS ColorSync Utility,
Make sure you select Extended Range, otherwise the values will be clamped to 0 and 1.
So, as you can see, your P3(1, 0, 0) corresponds to (1.0930, -0.2267, -0.1501) in extended sRGB.
Now, back to your MTKView,
If you set the colorPixelFormat of your MTKView to .bgra10_XR, then you obtain the brightest red if the output of your shader is,
(1.0930, -0.2267, -0.1501)
If you set the colorPixelFormat of your MTKView to .bgra10_XR_sRGB, then you obtain the brightest red if the output of your shader is,
(1.22486, -0.0420312, -0.0196301)
because you have to write a linear RGB value, since this texture format will apply the gamma correction for you. Be careful when applying the inverse gamma, since there are negative values. I use this function,
let f = { (c: Float) -> Float in
    if fabs(c) <= 0.04045 {
        return c / 12.92
    }
    return sign(c) * powf((fabs(c) + 0.055) / 1.055, 2.4)
}
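As a quick sanity check (my own addition, using the numbers quoted above), running the gamma-encoded extended-sRGB components through f reproduces the linear values listed for .bgra10_XR_sRGB:
let linearRed = f(1.0930)    // ≈ 1.2249
let linearGreen = f(-0.2267) // ≈ -0.0420
let linearBlue = f(-0.1501)  // ≈ -0.0196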
The last missing piece is creating a wide gamut UIImage. Set the color space to CGColorSpace.displayP3 and copy the data over. But what data, right? The brightest red in this image will be
(1, 0, 0)
or (65535, 0, 0) in 16-bit ints.
What I do in my code is use .rgba16Unorm textures to manipulate images in the Display P3 color space, where (1, 0, 0) is the brightest red in P3. This way, I can directly copy their contents over to a UIImage. Then, for display, I pass a color transform to the shader to convert from P3 to extended sRGB (so colors aren't saturated) before displaying. I use linear color, so my transform is just a 3x3 matrix. I set my view to .bgra10_XR_sRGB, so the gamma is applied automatically for me.
That (column-major) matrix is,
1.2249 -0.2247 0
-0.0420 1.0419 0
-0.0197 -0.0786 1.0979
You can read about how I generated it here: Exploring the display-P3 color space
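As a quick check (a sketch of my own, not part of the original answer), you can build that matrix in a playground with simd, treating the lines above as the rows of the mathematical matrix, and verify that it maps linear P3 red to the extended-sRGB value from earlier:
import simd

let p3ToLinearExtendedSRGB = float3x3(rows: [
    SIMD3<Float>( 1.2249, -0.2247,  0.0),
    SIMD3<Float>(-0.0420,  1.0419,  0.0),
    SIMD3<Float>(-0.0197, -0.0786,  1.0979),
])
let p3Red = SIMD3<Float>(1, 0, 0)            // brightest P3 red, linear
let srgbRed = p3ToLinearExtendedSRGB * p3Red // ≈ (1.2249, -0.0420, -0.0197)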
Here's an example I built using UIButtons and an MTKView, screen-captured on an iPhoneX,
The button on the left is the brightest red on sRGB, while the button on the right is using a displayP3 color. At the center, I placed an MTKView that outputs the transformed linear color as described above.
Same experiment for green,
Now, if you see this on a recent iPhone or iPad, both the square in the center and the button on the right should show the same bright color. If you view it on a Mac that can't display P3, the left button will appear to be the same color as well. If you view it on a Windows machine or in a browser without proper color management, the left button may also appear to be a different color, but only because the whole image is interpreted as sRGB and those pixels obviously have different values; the appearance won't be correct.
If you want more references, check the testP3UIColor unit test I added here: ColorTests.swift,
my functions to initialize the UIImage: Image.swift,
and a sample app to try out the conversions: SampleColorPalette
I haven't experimented with CIImages, but I guess the same principles apply.
I hope this information is of some help. It also took me long to figure out how to display colors properly because I couldn't find any explicit reference to displayP3 support in the Metal SDK documentation.

Init Generic RGB in Swift

I am attempting to convert a string containing a color in the Generic RGB color space into UIColor in Swift. For example, a typical string would look like this:
0.121569 0.129412 0.156863 1
Using the color picker in macOS, I discovered that these values are using the Generic RGB color space.
However, when I attempt to convert these values into UIColor, it uses the sRGB color space.
let red = CGFloat((components[0] as NSString).doubleValue)
let green = CGFloat((components[1] as NSString).doubleValue)
let blue = CGFloat((components[2] as NSString).doubleValue)
let alpha = CGFloat((components[3] as NSString).doubleValue)
print(UIColor(red: red, green: green, blue: blue, alpha: alpha))
// Log Result: NSCustomColorSpace sRGB IEC61966-2.1 colorspace 0.121569 0.129412 0.156863 1
Hence, a different color is displayed in my application. I confirmed this by changing the color space in Color Picker to IEC61966-2.1 and it indeed displayed different values:
Any idea how I would convert the Generic RGB values into the correct UIColor values?
EDIT For clarification, I am unable to change the color values in the string into another scheme as I am reading the colors from an external source in an XML file
Color conversion by way of color space is performed at the level of CGColor. Example:
let sp = CGColorSpace(name: CGColorSpace.genericRGBLinear)!
let comps : [CGFloat] = [0.121569, 0.129412, 0.156863, 1]
let c = CGColor(colorSpace: sp, components: comps)!
let sp2 = CGColorSpace(name: CGColorSpace.sRGB)!
let c2 = c.converted(to: sp2, intent: .relativeColorimetric, options: nil)!
let color = UIColor(cgColor: c2)
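Wrapped up for the question's whitespace-separated strings, that might look like this (a sketch; the function name is mine, and per the edit below the values may in fact already be sRGB):
func uiColor(fromGenericRGBString string: String) -> UIColor? {
    let comps = string.split(separator: " ").compactMap { Double(String($0)) }.map { CGFloat($0) }
    guard comps.count == 4,
        let generic = CGColorSpace(name: CGColorSpace.genericRGBLinear),
        let cgColor = CGColor(colorSpace: generic, components: comps),
        let sRGB = CGColorSpace(name: CGColorSpace.sRGB),
        let converted = cgColor.converted(to: sRGB, intent: .relativeColorimetric, options: nil)
        else { return nil }
    return UIColor(cgColor: converted)
}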
EDIT I think the premise of your original problem is erroneous. You are trying, it turns out, to use the numbers in an Xcode FontAndColorThemes file. Those numbers are sRGB, not generic RGB.
To prove it, I ran this code:
let sp = CGColorSpace(name: CGColorSpace.sRGB)!
let comps : [CGFloat] = [0.0, 0.456, 0.0, 1]
let c = CGColor(colorSpace: sp, components: comps)!
let v1 = UIView(frame:CGRect(x: 50, y: 50, width: 50, height: 50))
v1.backgroundColor = UIColor(cgColor:c)
self.view.addSubview(v1)
That color is taken from the Default color theme's Comment color. Well, the result is identical to the Comment color, as this screen shot demonstrates:
I get the same answer when I use the "eyedropper" tool as when I simply open the color swatch to read the inspector. And I get the same answer when I use the "eyedropper" tool on Xcode's swatch and on my iOS swatch. This seems to me to prove that these colors were always sRGB.

Adding semi-transparent images as textures in Scenekit

When I add a semi-transparent image (sample) as a texture for an SCNNode, how can I specify a color attribute for the node where the image is transparent? Since I can only specify either a color or an image as a material property, I am unable to give the node a color value as well. Is there a way to specify both a color and an image for the material property, or is there a workaround for this problem?
If you are assigning the image to the contents of the transparent material property, you can change the material's transparencyMode to be either .AOne or .RGBZero.
.AOne means that transparency is derived from the image's alpha channel.
.RGBZero means that transparency is derived from the luminance (the total red, green, and blue) in the image.
You cannot configure an arbitrary color to be treated as transparency without a custom shader.
However, from the looks of your sample image, I would think that assigning the sample image to the transparent material property's contents and using the .AOne transparency mode would give you the result you are looking for.
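Sketched out (using the same Swift 2-era names as this answer; the material, node, and image names are placeholders), that configuration would look something like:
let material = SCNMaterial()
material.diffuse.contents = UIColor.redColor()            // the color to show
material.transparent.contents = UIImage(named: "sample")  // the semi-transparent image
material.transparencyMode = .AOne                         // transparency from the image's alpha
node.geometry?.firstMaterial = material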
I'm posting this as a new answer because it's different from the other answer.
Based on your comment, I understand that you want to use an image with transparency as the diffuse content of a material, but show a background color wherever the image is transparent. In other words, you want to use a composite of the image over a color as the diffuse contents.
Using UIImage
There are a few different ways you can achieve this composited image. The easiest and likely most familiar solution is to create a new UIImage that draws the image over the color. This new image will have the same size and scale as your image, but can be opaque since it has a solid background color.
func imageByComposing(image: UIImage, over color: UIColor) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(image.size, true, image.scale)
    defer {
        UIGraphicsEndImageContext()
    }
    let imageRect = CGRect(origin: .zero, size: image.size)
    // fill with the background color
    color.set()
    UIRectFill(imageRect)
    // draw the image on top
    image.drawInRect(imageRect)
    return UIGraphicsGetImageFromCurrentImageContext()
}
Using this image as the contents of the diffuse material property will give you the effect that you're after.
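For instance (assuming a hypothetical textureImage and a red background):
let composited = imageByComposing(textureImage, over: UIColor.redColor())
material.diffuse.contents = composited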
Using Shader Modifiers
If you find yourself having to change the color very frequently (possibly animating it), you could also use custom shaders or shader modifiers to composite the image over the color.
In that case, you want to composite the image A over the color B, so that the output color C_out is:
C_out = C_A + C_B * (1 - α_A)
By passing the image as the diffuse contents, and assigning the output back to the diffuse content, the expression can be simplified to:
C_diffuse = C_diffuse + C_color * (1 - α_diffuse)
C_diffuse += C_color * (1 - α_diffuse)
Generally the output alpha would depend on the alpha of both A and B, but since B (the color) is opaque (alpha 1), the output alpha is also 1.
This can be written as a small shader modifier. Since the motivation for this solution was to be able to change the color, the color is created as a uniform variable which can be updated in code.
// Define a color that can be set/changed from code
uniform vec3 backgroundColor;
#pragma body
// Composite A (the image) over B (the color):
// output = image + color * (1-alpha_image)
float alpha = _surface.diffuse.a;
_surface.diffuse.rgb += backgroundColor * (1.0 - alpha);
// make fully opaque (since the color is fully opaque)
_surface.diffuse.a = 1.0;
This shader modifier would then be read from the file and set in the material's shaderModifiers dictionary:
enum ShaderLoadingError: ErrorType {
    case FileNotFound, FailedToLoad
}

func shaderModifier(named shaderName: String, fileExtension: String = "glsl") throws -> String {
    guard let url = NSBundle.mainBundle().URLForResource(shaderName, withExtension: fileExtension) else {
        throw ShaderLoadingError.FileNotFound
    }
    do {
        return try String(contentsOfURL: url)
    } catch {
        throw ShaderLoadingError.FailedToLoad
    }
}

// later, in the code that configures the material ...
do {
    let modifier = try shaderModifier(named: "Composit") // the name of the shader modifier file (assuming a 'glsl' file extension)
    theMaterial.shaderModifiers = [SCNShaderModifierEntryPointSurface: modifier]
} catch {
    // Handle the error here
    print(error)
}
You would then be able to change the color by setting a new value for the "backgroundColor" of the material. Note that there is no initial value, so one would have to be set.
let backgroundColor = SCNVector3Make(1.0, 0.0, 0.7) // r, g, b
// Set the color components as an SCNVector3 wrapped in an NSValue
// for the same key as the name of the uniform variable in the shader modifier
theMaterial.setValue(NSValue(SCNVector3: backgroundColor), forKey: "backgroundColor")
As you can see, the first solution is simpler and the one I would recommend if it suits your needs. The second solution is more complicated, but enables the background color to be animated.
Just in case someone comes across this in the future... for some tasks, rickster's solution is likely the easiest. In my case, I wanted to display a grid on top of an image that was mapped to a sphere. I originally composited the images into one and applied that, but over time I got fancier and it started getting complex. So I made two spheres, one inside the other. I put the grid on the inner one and the image on the outer one and presto...
let outSphereGeometry = SCNSphere(radius: 20)
outSphereGeometry.segmentCount = 100
let outSphereMaterial = SCNMaterial()
outSphereMaterial.diffuse.contents = topImage
outSphereMaterial.isDoubleSided = true
outSphereGeometry.materials = [outSphereMaterial]
outSphere = SCNNode(geometry: outSphereGeometry)
outSphere.position = SCNVector3(x: 0, y: 0, z: 0)

let sphereGeometry = SCNSphere(radius: 10)
sphereGeometry.segmentCount = 100
let sphereMaterial = SCNMaterial()
sphereMaterial.diffuse.contents = gridImage
sphereMaterial.isDoubleSided = true
sphereGeometry.materials = [sphereMaterial]
sphere = SCNNode(geometry: sphereGeometry)
sphere.position = SCNVector3(x: 0, y: 0, z: 0)
I was surprised that I didn't need to set sphereMaterial.transparency, it seems to get this automatically.

How to recolor an image using CoreGraphics?

I have a picture in shades of gray. Is it possible to recolor the image in shades of other (arbitrary) color? Something like on the picture below.
I think I can do that by accessing each pixel of the image and changing it as necessary, but I think that there might be a better way.
I would like to do all this using CoreGraphics.
I solved the task using the following code:
CGContextSaveGState(bitmapContext);
// The fill color supplies the hue/saturation used by the "color" blend mode below
CGContextSetRGBFillColor(bitmapContext, red, green, blue, alpha);
// Flip the coordinate system so the source image isn't drawn upside down
CGContextTranslateCTM(bitmapContext, 0.0, contextHeight);
CGContextScaleCTM(bitmapContext, 1.0, -1.0);
CGContextDrawImage(bitmapContext, sourceImageRect, sourceImage);
// kCGBlendModeColor keeps the luminance of the grayscale image but takes
// hue and saturation from the fill color
CGContextSetBlendMode(bitmapContext, kCGBlendModeColor);
CGContextFillRect(bitmapContext, sourceImageRect);
CGContextRestoreGState(bitmapContext);
CGImageRef coloredImage = CGBitmapContextCreateImage(bitmapContext);
In this code bitmapContext is context created with CGBitmapContextCreate.
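For reference, here is the same approach sketched at the UIImage level in Swift (my own sketch mirroring the code above, not part of the original answer; the helper name is made up):
func recolored(_ image: UIImage, tint: UIColor) -> UIImage? {
    let rect = CGRect(origin: .zero, size: image.size)
    UIGraphicsBeginImageContextWithOptions(image.size, false, image.scale)
    defer { UIGraphicsEndImageContext() }
    guard let ctx = UIGraphicsGetCurrentContext() else { return nil }
    image.draw(in: rect)            // UIImage.draw handles the coordinate flip
    ctx.setFillColor(tint.cgColor)
    ctx.setBlendMode(.color)        // keep luminance, take hue/saturation from the tint
    ctx.fill(rect)
    return UIGraphicsGetImageFromCurrentImageContext()
}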
