I'm building a real-time photo editor based on CIFilters and MetalKit. But I'm running into an issue with displaying wide gamut images in a MTKView.
Standard sRGB images display just fine, but Display P3 images are washed out.
I've tried setting the colorSpace parameter of CIContext.render to the image's color space, but the issue persists.
Here are snippets of the code:
guard let inputImage = CIImage(mtlTexture: sourceTexture!) else { return }
let outputImage = imageEditor.processImage(inputImage)

print(colorSpace)

context.render(outputImage,
               to: currentDrawable.texture,
               commandBuffer: commandBuffer,
               bounds: inputImage.extent,
               colorSpace: colorSpace)
commandBuffer?.present(currentDrawable)
let pickedImage = info[UIImagePickerControllerOriginalImage] as! UIImage
print(pickedImage.cgImage?.colorSpace)
if let cspace = pickedImage.cgImage?.colorSpace {
    colorSpace = cspace
}
I have found a similar issue on the Apple developer forums, but without any answers: https://forums.developer.apple.com/thread/66166
In order to support the wide color gamut, you need to set the colorPixelFormat of your MTKView to either .bgra10_xr or .bgra10_xr_srgb. I suspect the colorSpace property of macOS MTKViews isn't supported on iOS because color management on iOS is not active but targeted (read Best practices for color management).
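As a minimal sketch, assuming mtkView is the MTKView from your pipeline (the framebufferOnly line is my assumption about your setup, since CIContext needs to write into the drawable's texture):

mtkView.colorPixelFormat = .bgra10_xr_srgb   // or .bgra10_xr if you write gamma-encoded values yourself
mtkView.framebufferOnly = false              // typically required so CIContext can render into the drawable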
Without seeing your images and their actual values, it is hard to diagnose, but I'll explain my findings & experiments. I suggest you start like I did, by debugging a single color.
For instance, what's the reddest point in P3 color space? It can be defined through a UIColor like this:
UIColor(displayP3Red: 1, green: 0, blue: 0, alpha: 1)
Add a UIButton to your view with the background set to that color for debugging purposes. You can either get the components in code to see what those values become in sRGB,
var fRed: CGFloat = 0
var fGreen: CGFloat = 0
var fBlue: CGFloat = 0
var fAlpha: CGFloat = 0
let c = UIColor(displayP3Red: 1, green: 0, blue: 0, alpha: 1)
c.getRed(&fRed, green: &fGreen, blue: &fBlue, alpha: &fAlpha)
// fRed ≈ 1.0930, fGreen ≈ -0.2267, fBlue ≈ -0.1501 (extended sRGB)
or you can use the Calculator in the macOS ColorSync Utility.
Make sure you select Extended Range, otherwise the values will be clamped to 0 and 1.
So, as you can see, your P3(1, 0, 0) corresponds to (1.0930, -0.2267, -0.1501) in extended sRGB.
Now, back to your MTKView,
If you set the colorPixelFormat of your MTKView to .bgra10_xr, then you obtain the brightest red if the output of your shader is,
(1.0930, -0.2267, -0.1501)
If you set the colorPixelFormat of your MTKView to .bgra10_xr_srgb, then you obtain the brightest red if the output of your shader is,
(1.22486, -0.0420312, -0.0196301)
because you have to write a linear RGB value, since this texture format will apply the gamma correction for you. Be careful when applying the inverse gamma, since there are negative values. I use this function,
let f = { (c: Float) -> Float in
    // linear segment of the sRGB curve near zero
    if abs(c) <= 0.04045 {
        return c / 12.92
    }
    // preserve the sign: extended-range components can be negative
    let s: Float = c < 0 ? -1 : 1
    return s * powf((abs(c) + 0.055) / 1.055, 2.4)
}
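As a sanity check, feeding the gamma-encoded extended sRGB red from above through f recovers the linear values:

let encoded: [Float] = [1.0930, -0.2267, -0.1501]   // extended sRGB, gamma-encoded
let linear = encoded.map(f)                          // ≈ [1.22486, -0.0420312, -0.0196301]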
The last missing piece is creating a wide gamut UIImage. Set the color space to CGColorSpace.displayP3 and copy the data over. But what data, right? The brightest red in this image will be
(1, 0, 0)
or (65535, 0, 0) in 16-bit ints.
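Here is a rough sketch of that step, and only a sketch: makeP3Image is a hypothetical helper, and the premultiplied-alpha choice and exact bitmap flags are my assumptions, so adjust them to your pipeline. It wraps 16-bit RGBA data in a Display P3 image:

import UIKit

// Sketch: wrap 16-bit RGBA pixel data (65535 = full brightness) in a Display P3 image.
// Assumes `pixels` holds width * height * 4 UInt16 values.
func makeP3Image(pixels: [UInt16], width: Int, height: Int) -> UIImage? {
    let space = CGColorSpace(name: CGColorSpace.displayP3)!
    let data = pixels.withUnsafeBufferPointer { Data(buffer: $0) }
    guard let provider = CGDataProvider(data: data as CFData) else { return nil }
    // Assumption: premultiplied alpha, host (little-endian) 16-bit components
    let info: CGBitmapInfo = [.byteOrder16Little,
                              CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)]
    guard let cgImage = CGImage(width: width, height: height,
                                bitsPerComponent: 16, bitsPerPixel: 64,
                                bytesPerRow: width * 8, space: space,
                                bitmapInfo: info, provider: provider,
                                decode: nil, shouldInterpolate: false,
                                intent: .defaultIntent) else { return nil }
    return UIImage(cgImage: cgImage)
}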
What I do in my code is use .rgba16Unorm textures to manipulate images in the Display P3 color space, where (1, 0, 0) is the brightest red in P3. This way, I can copy their contents directly to a UIImage. Then, for display, I pass a color transform to the shader to convert from P3 to extended sRGB (so colors aren't saturated) before displaying. I work in linear color, so my transform is just a 3x3 matrix. I set my view to .bgra10_xr_srgb, so the gamma is applied automatically for me.
That (column-major) matrix is,
1.2249 -0.2247 0
-0.0420 1.0419 0
-0.0197 -0.0786 1.0979
You can read about how I generated it here: Exploring the display-P3 color space
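If you want to sanity-check the matrix on the CPU, here is a small simd sketch; the values are the ones above (read row by row as printed), everything else is just scaffolding:

import simd

// The matrix exactly as printed above (rows); simd stores it column-major internally
let p3ToExtendedSRGB = float3x3(rows: [
    SIMD3<Float>( 1.2249, -0.2247,  0),
    SIMD3<Float>(-0.0420,  1.0419,  0),
    SIMD3<Float>(-0.0197, -0.0786,  1.0979)
])
let p3Red = SIMD3<Float>(1, 0, 0)          // brightest P3 red, linear
let extended = p3ToExtendedSRGB * p3Red    // ≈ (1.2249, -0.0420, -0.0197), the linear red from above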
Here's an example I built using UIButtons and an MTKView, screen-captured on an iPhone X,
The button on the left is the brightest red on sRGB, while the button on the right is using a displayP3 color. At the center, I placed an MTKView that outputs the transformed linear color as described above.
Same experiment for green,
Now, if you view this on a recent iPhone or iPad, both the square in the center and the button on the right should show the same bright color. If you view it on a Mac that can't display P3, the button on the right will appear the same color as the one on the left. On a Windows machine, or in a browser without proper color management, the left button may also appear to be a different color, but that's only because the whole image is interpreted as sRGB and those pixels obviously have different values... Either way, the appearance won't be correct.
If you want more references, check the testP3UIColor unit test I added here: ColorTests.swift,
my functions to initialize the UIImage: Image.swift,
and a sample app to try out the conversions: SampleColorPalette
I haven't experimented with CIImages, but I guess the same principles apply.
I hope this information is of some help. It also took me a long time to figure out how to display colors properly, because I couldn't find any explicit reference to Display P3 support in the Metal SDK documentation.
Now I'm really confused.
Here's how the variable gets instantiated:
Utils.redColor = UIColor(red: CGFloat(red) / 255.0, green: CGFloat(green) / 255.0, blue: CGFloat(blue)/255.0, alpha: alpha)
And here I enumerate an attributed text's attributes to skip the color if it equals Utils.redColor:
text.enumerateAttributes(in: NSRange(0..<text.length), options: []) { (attributes, range, _) -> Void in
    for (attribute, object) in attributes {
        if let textColor = object as? UIColor {
            NSLog("textColor = \(textColor) red = \(Utils.redColor!)")
            if !textColor.isEqual(Utils.redColor!) {
                // I need to repaint any textColor other than red
                text.setAttributes(textAttributes, range: range)
            }
        }
    }
}
So, as you can see in this code, textColor is a UIColor object as well, but the log says:
textColor = kCGColorSpaceModelRGB 0.666667 0.172549 0.172549 1 red = UIExtendedSRGBColorSpace 0.666667 0.172549 0.172549 1
These are exactly the same color values, but they seem to be instances of two different classes. This is totally confusing, since both of them are objects of the UIColor class!
This comparison never triggers, although it worked well in Swift 2.
How do I fix it, and why does this problem occur?
Welcome to the wild and wooly world of wide color and color management.
Your two colors aren't equal, per isEqual (or Swift ==, which runs through isEqual for ObjC classes that have it), because they have different color spaces. (They aren't different classes; the first item in UIColor.description is an identifier for the color space, or where the color space doesn't have a name, the model for the color space — that is, whether it's RGB-based, CMYK-based, grayscale, etc.)
Without a color space to define them as a color, the four component values of a color have no reliable meaning, so isEqual uses both the component values and the color space to test for equality.
Aside on color spaces (skip down for solutions)
Your color created with UIColor init(red:green:blue:alpha:) uses the "Extended sRGB" color space. This color space is designed to support wide color displays (like the P3 color display in iPhone 7, iPad Pro 9.7", iMac late-2015, MacBook Pro late-2016, and probably whatever else comes next), but be component-value compatible with the sRGB color space used on other devices.
For example, sRGB 1.0, 0.0, 0.0 is the "red" you're probably most used to... but if you create a color in the P3 color space with RGB values 1.0, 0.0, 0.0, you get a much, much redder color. If you have an app where you need to support both sRGB and P3 displays and work directly with color components, this can get confusing. So the Extended sRGB space lets the same component values mean the same thing, but also allows colors outside the sRGB gamut to be specified using values outside the 0.0-1.0 range. For example, the reddest that Display P3 can get is expressed in Extended sRGB as (roughly) 1.093, -0.227, -0.15.
As the docs for that initializer note, for apps linked against the iOS 10 SDK or later, init(red:green:blue:alpha:) creates a color in the Extended sRGB color space, but for older apps (even if they're running on iOS 10) it creates a color in a device-specific RGB space (which you can generally treat as equivalent to sRGB).
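A quick sketch of the consequence for equality (the two reds here are just illustrative):

let srgbRed = UIColor(red: 1, green: 0, blue: 0, alpha: 1)        // extended sRGB on iOS 10+ SDKs
let p3Red = UIColor(displayP3Red: 1, green: 0, blue: 0, alpha: 1) // Display P3
print(srgbRed == p3Red)  // false: different color spaces, and genuinely different colors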
Dealing with different color spaces
So, either your color-replacing code or whatever code is creating the colors in your attributed string needs to be aware of color spaces. There are a few possible ways to deal with this; pick the one that works best for you:
Make sure both your string-creation code and your color-replacement code are using the same device-independent color space. UIColor doesn't provide many utilities for working with color spaces, so you can either use Display P3 (on iOS 10 and up) or drop down to CGColor:
let sRGB = CGColorSpace(name: CGColorSpace.sRGB)!
let cgDarkRed = CGColor(colorSpace: sRGB, components: [0.666667, 0.172549, 0.172549, 1])!
let darkRed = UIColor(cgColor: cgDarkRed)

// example creating attributed string...
let attrString = NSAttributedString(string: "red", attributes: [NSForegroundColorAttributeName: darkRed])

// example processing text...
let redAttributes = [NSForegroundColorAttributeName: darkRed]
text.enumerateAttributes(in: NSRange(0..<attrString.length)) { (attributes, range, stop) in
    for (_, textColor) in attributes where (textColor as? UIColor) != darkRed {
        text.setAttributes(redAttributes, range: range)
    }
}
If you can't control the input colors, convert them to the same color space before comparing. Here's a UIColor extension to do that:
extension UIColor {
    func isEqualWithConversion(_ color: UIColor) -> Bool {
        guard let space = self.cgColor.colorSpace
            else { return false }
        guard let converted = color.cgColor.converted(to: space, intent: .absoluteColorimetric, options: nil)
            else { return false }
        return self.cgColor == converted
    }
}
(Then you can just use this function in place of == or isEqual in your text processing.)
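Applied to your original loop, that would look something like:

if !textColor.isEqualWithConversion(Utils.redColor!) {
    // repaint any textColor other than red
    text.setAttributes(textAttributes, range: range)
}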
Just get the raw component values of the colors and compare them directly, based on the assumption that you know the color spaces for both are compatible. This is sort of fragile, so I recommend against it.
A CVPixelBuffer object can have one or many planes (reference).
There are methods to get the number of planes, and each plane's height and base address.
So what exactly is a plane? And how is it constructed inside a CVPixelBuffer?
Sample:
<CVPixelBuffer 0x1465f8b30 width=1280 height=720 pixelFormat=420v iosurface=0x14a000008 planes=2>
<Plane 0 width=1280 height=720 bytesPerRow=1280>
<Plane 1 width=640 height=360 bytesPerRow=1280>
Video formats are an incredibly complex subject.
Some video streams have the pixels stored in bytes as RGBA, ARGB, ABGR, or several other variants (with or without an alpha channel).
(In RGBA format, you'd have the red, green, blue, and alpha values of a pixel one right after another in memory, followed by another set of 4 bytes with the color values of the next pixel, etc.) This is interleaved color information.
Some video streams separate out the color channels, so the red, green, blue, and alpha channels are sent as separate "planes". You'd get a buffer with all the red information, then all the blue data, then all the green, and then the alpha, if alpha is included. (Think of color negatives, where there are separate layers of emulsion to capture the different colors. The layers of emulsion are planes of color information. It's the same idea with digital.)
There are formats where the color data is in one or two planes, and then the luminance is in a separate plane. That's how old analog color TV worked: it started out as black and white (luminance), and then broadcasters added side-band signals to convey the color information (chroma).
I don't muck around with CVPixelBuffers often enough to know the gory details of what you are asking, and have to invest large amounts of time and copious amounts of coffee before I can "spin up" my brain enough to grasp those gory details.
Edit:
Since your debug information shows 2 planes, it seems likely that this pixel buffer has a luminance channel and a chroma channel, as mentioned in #zeh's answer. (The 420v in your dump is a bi-planar Y'CbCr format: plane 0 holds full-resolution luminance, and plane 1 holds interleaved Cb/Cr pairs at half resolution in each dimension. That's why plane 1 is 640x360 yet still has bytesPerRow=1280: two bytes per chroma position.)
Although the existing and accepted answer is rich in important information when dealing with CVPixelBuffers, in this particular case the answer is wrong. The two planes that the question refers to are the luminance and chrominance planes.
Luminance refers to brightness and chrominance refers to color - From Quora
The following code snippet from Apple makes it more clear:
let lumaBaseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0)
let lumaWidth = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0)
let lumaHeight = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0)
let lumaRowBytes = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0)

var sourceLumaBuffer = vImage_Buffer(data: lumaBaseAddress,
                                     height: vImagePixelCount(lumaHeight),
                                     width: vImagePixelCount(lumaWidth),
                                     rowBytes: lumaRowBytes)

let chromaBaseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1)
let chromaWidth = CVPixelBufferGetWidthOfPlane(pixelBuffer, 1)
let chromaHeight = CVPixelBufferGetHeightOfPlane(pixelBuffer, 1)
let chromaRowBytes = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1)

var sourceChromaBuffer = vImage_Buffer(data: chromaBaseAddress,
                                       height: vImagePixelCount(chromaHeight),
                                       width: vImagePixelCount(chromaWidth),
                                       rowBytes: chromaRowBytes)
See full reference here.
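To make the layout concrete, here is a sketch of my own (not from Apple's sample) of how you would read one pixel's luma and chroma from a 420v buffer; x and y are hypothetical pixel coordinates:

CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

// Plane 0: full-resolution luminance, one byte per pixel
let lumaBase = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0)!
let lumaRowBytes = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0)
let luma = lumaBase.load(fromByteOffset: y * lumaRowBytes + x, as: UInt8.self)

// Plane 1: half-resolution chroma, interleaved Cb/Cr pairs (one pair per 2x2 pixel block)
let chromaBase = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1)!
let chromaRowBytes = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1)
let cb = chromaBase.load(fromByteOffset: (y / 2) * chromaRowBytes + (x / 2) * 2, as: UInt8.self)
let cr = chromaBase.load(fromByteOffset: (y / 2) * chromaRowBytes + (x / 2) * 2 + 1, as: UInt8.self)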
I am trying to draw a circle shaded with a gradient from white to transparent. I am using Core Graphics.
Here is what I have which draws a gradient from white to black:
let colorSpace = CGColorSpaceCreateDeviceRGB();
let colors = [UIColor.white.cgColor, UIColor.black.cgColor] as CFArray;
let locations : [CGFloat] = [0.0, 1.0];
let glowGradient : CGGradient = CGGradient.init(colorsSpace: colorSpace, colors: colors, locations: locations)!;
let ctx = UIGraphicsGetCurrentContext()!;
ctx.drawRadialGradient(glowGradient, startCenter: rectCenter, startRadius: 0, endCenter: rectCenter, endRadius: imageWidthPts/2, options: []);
However, I do not want to draw white-to-black; I want to draw white-to-transparent.
To do so, I first tried changing the end color to UIColor.white.cgColor.copy(alpha: 0.0) (i.e., transparent white). However, this failed with:
fatal error: unexpectedly found nil while unwrapping an Optional value
I assume this error is due to the color being outside the specified RGB color space (CGColorSpaceCreateDeviceRGB()).
The fix would seem to be to change the specified color space to one with an alpha component, such as RGBA. However, such color spaces do not appear to exist! There are only CGColorSpaceCreateDeviceRGB, CGColorSpaceCreateDeviceCMYK, and CGColorSpaceCreateDeviceGray.
But it makes no sense for there to be no available color spaces with an alpha component. The documentation explicitly describes support for alpha in gradients. The documentation for CGGradient.init says:
For example, if the color space is an RGBA color space and you want to use two colors in the gradient (one for a starting location and another for an ending location), then you need to provide 8 values in components—red, green, blue, and alpha values for the first color, followed by red, green, blue, and alpha values for the second color.
This RGBA encoding makes perfect sense, but it's impossible to tell Core Graphics that I'm using such an RGBA encoding, because there is no RGBA color space!
Where is the CGColorSpace for RGBA?
You don't need the RGBA color space to draw transparent radial/linear gradients. The RGB color space is enough. If it's not drawing it transparently, you probably have the background color of the view or the context misconfigured.
If you're creating a context you want to make sure that you pass in false for opaque: Iphone How to make context background transparent?
If you're using a CALayer on a UIView, you need to make sure that the UIView's background color is set to UIColor.clear. If it's set to nil, you'll end up with the gradient blending with black instead.
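In code, the two cases look roughly like this (canvasSize and gradientView stand in for your own size and view):

// Creating your own bitmap context: opaque = false preserves the alpha channel
UIGraphicsBeginImageContextWithOptions(canvasSize, false, 0)

// Hosting the gradient in a view: use .clear, not nil
gradientView.backgroundColor = UIColor.clear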
A slightly unsatisfactory answer: you can get the color space of a context with .colorSpace. In my case, it seems to give me an RGBA space, but I can't see any guarantee of this.
Here's the gradient using .colorSpace:
let ctx = UIGraphicsGetCurrentContext()!;
let colorSpace = ctx.colorSpace!;
let colorComponents : [CGFloat] = [
//  R    G    B    A
    1.0, 1.0, 1.0, 1.0,
    1.0, 1.0, 1.0, 0.0,
];
let locations : [CGFloat] = [0.0, 1.0];
let glowGradient : CGGradient = CGGradient.init(
    colorSpace: colorSpace,
    colorComponents: colorComponents,
    locations: locations,
    count: locations.count
)!;
ctx.drawRadialGradient(glowGradient, startCenter: rectCenter, startRadius: 0, endCenter: rectCenter, endRadius: imageWidthPts/2, options: []);
It's particularly confusing that, in my case, colorSpace.numberOfComponents evaluates to 3, i.e. not RGBA, and yet the alpha component in the gradient is still interpreted correctly. ¯\_(ツ)_/¯ (In fact that's expected: a CGColorSpace never counts alpha among its components, and the docs for this CGGradient initializer say to supply numberOfComponents + 1 values per color stop, the extra one being alpha.)
When I add a semi-transparent image (sample) as a texture for an SCNNode, how can I specify a color for the node where the image is transparent? Since I can specify either a color or an image as a material property, I am unable to give the node a color value as well. Is there a way to specify both a color and an image for the material property, or is there a workaround to this problem?
If you assign the image to the contents of the transparent material property, you can change the material's transparencyMode to either .AOne or .RGBZero.
.AOne means that transparency is derived from the image's alpha channel.
.RGBZero means that transparency is derived from the luminance (the total red, green, and blue) in the image.
You cannot configure an arbitrary color to be treated as transparency without a custom shader.
However, from the looks of your sample image, I would think that assigning it to the transparent material property's contents and using the .AOne transparency mode would give you the result you are looking for.
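A minimal sketch of that setup (sampleImage and node stand in for your own texture and node):

let material = SCNMaterial()
material.diffuse.contents = UIColor.redColor()  // the node's base color
material.transparent.contents = sampleImage     // the image whose alpha drives transparency
material.transparencyMode = .AOne               // transparency from the alpha channel
node.geometry?.firstMaterial = material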
I'm posting this as a new answer because it's different from the other answer.
Based on your comment, I understand that you want to use an image with transparency as the diffuse content of a material, but use a background color wherever the image is transparent. In other words, you want to use a composite of the image over a color as the diffuse contents.
Using UIImage
There are a few different ways you can achieve this composited image. The easiest and likely most familiar solution is to create a new UIImage that draws the image over the color. This new image will have the same size and scale as your image, but can be opaque since it has a solid background color.
func imageByComposing(image: UIImage, over color: UIColor) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(image.size, true, image.scale)
    defer {
        UIGraphicsEndImageContext()
    }

    let imageRect = CGRect(origin: .zero, size: image.size)

    // fill with background color
    color.set()
    UIRectFill(imageRect)

    // draw image on top
    image.drawInRect(imageRect)

    return UIGraphicsGetImageFromCurrentImageContext()
}
Using this image as the contents of the diffuse material property will give you the effect that you're after.
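For example (sampleImage and cubeNode stand in for your own image and node):

let composited = imageByComposing(sampleImage, over: UIColor.redColor())
cubeNode.geometry?.firstMaterial?.diffuse.contents = composited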
Using Shader Modifiers
If you find yourself having to change the color very frequently (possibly animating it), you could also use custom shaders or shader modifiers to composite the image over the color.
In that case, you want to composite the image A over the color B, so that the output color C_O is (assuming premultiplied alpha):

C_O = C_A + C_B * (1 - α_A)

By passing the image as the diffuse contents, and assigning the output back to the diffuse contents, the expression can be simplified to:

C_diffuse = C_diffuse + C_color * (1 - α_diffuse)
C_diffuse += C_color * (1 - α_diffuse)

Generally the output alpha would depend on the alpha of A and B, but since B (the color) is opaque (α_B = 1), the output alpha is also 1.
This can be written as a small shader modifier. Since the motivation for this solution was to be able to change the color, the color is declared as a uniform variable which can be updated from code.
// Define a color that can be set/changed from code
uniform vec3 backgroundColor;

#pragma body

// Composite A (the image) over B (the color):
// output = image + color * (1 - alpha_image)
float alpha = _surface.diffuse.a;
_surface.diffuse.rgb += backgroundColor * (1.0 - alpha);

// make fully opaque (since the color is fully opaque)
_surface.diffuse.a = 1.0;
This shader modifier would then be read from its file and set in the material's shader modifiers dictionary:
enum ShaderLoadingError: ErrorType {
    case FileNotFound, FailedToLoad
}

func shaderModifier(named shaderName: String, fileExtension: String = "glsl") throws -> String {
    guard let url = NSBundle.mainBundle().URLForResource(shaderName, withExtension: fileExtension) else {
        throw ShaderLoadingError.FileNotFound
    }
    do {
        return try String(contentsOfURL: url)
    } catch {
        throw ShaderLoadingError.FailedToLoad
    }
}
// later, in the code that configures the material ...
do {
    let modifier = try shaderModifier(named: "Composit") // the name of the shader modifier file (assuming a 'glsl' file extension)
    theMaterial.shaderModifiers = [SCNShaderModifierEntryPointSurface: modifier]
} catch {
    // Handle the error here
    print(error)
}
You would then be able to change the color by setting a new value for "backgroundColor" on the material. Note that there is no initial value, so one has to be set.
let backgroundColor = SCNVector3Make(1.0, 0.0, 0.7) // r, g, b

// Set the color components as an SCNVector3 wrapped in an NSValue,
// for the same key as the name of the uniform variable in the shader modifier
theMaterial.setValue(NSValue(SCNVector3: backgroundColor), forKey: "backgroundColor")
As you can see, the first solution is simpler, and it's the one I would recommend if it suits your needs. The second solution is more complicated, but it enables the background color to be animated.
Just in case someone comes across this in the future... for some tasks, rickster's solution is likely the easiest. In my case, I wanted to display a grid on top of an image that was mapped to a sphere. I originally composited the images into one and applied that, but over time I got fancier and this started getting complex. So I made two spheres, one inside the other. I put the grid on the inner one and the image on the outer one, and presto...
let outSphereGeometry = SCNSphere(radius: 20)
outSphereGeometry.segmentCount = 100
let outSphereMaterial = SCNMaterial()
outSphereMaterial.diffuse.contents = topImage
outSphereMaterial.isDoubleSided = true
outSphereGeometry.materials = [outSphereMaterial]
outSphere = SCNNode(geometry: outSphereGeometry)
outSphere.position = SCNVector3(x: 0, y: 0, z: 0)

let sphereGeometry = SCNSphere(radius: 10)
sphereGeometry.segmentCount = 100
let sphereMaterial = SCNMaterial()
sphereMaterial.diffuse.contents = gridImage
sphereMaterial.isDoubleSided = true
sphereGeometry.materials = [sphereMaterial]
sphere = SCNNode(geometry: sphereGeometry)
sphere.position = SCNVector3(x: 0, y: 0, z: 0)
I was surprised that I didn't need to set sphereMaterial.transparency; it seems to be handled automatically.
Apple has made it very simple to make linear and radial gradients, but is it possible to have the colors of the gradient be set by a definable function? In my situation, I want the fill color of an object to vary along the x-axis with a sine function. It is not hard to make PNGs and use them as patterns instead, but I wonder if it is possible to make gradients where the red, green, and blue components vary along a certain axis with a sine function.
Any answer is appreciated. Thanks in advance.
When you create the gradient using the CAGradientLayer class, you can use the colors property with a large number of colors and vary the color components according to the sine function. There will be linear interpolation between each pair of consecutive colors, but this won't be noticeable when the number of colors (and locations) is large enough.
Here is an example that draws a sine gradient oscillating between red and blue.
// Compute the colors using a sine step-size
let samples = 100
var colors = [CGColor]()
for i in 0..<samples {
    let component = CGFloat(0.5 + sin(Double(i) / Double(samples - 1) * 4.0 * M_PI) / 2.0)
    colors.append(UIColor(red: component, green: 0, blue: 1 - component, alpha: 1).CGColor)
}

// Create the gradient layer
let gradientLayer: CAGradientLayer = CAGradientLayer()
gradientLayer.colors = colors

// Install the gradient layer
gradientLayer.frame = self.view.bounds
self.view.layer.insertSublayer(gradientLayer, atIndex: 0)
The end result looks like this.
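One detail worth adding: a CAGradientLayer runs top to bottom by default, so to make the sine variation run along the x-axis as the question asks, set the start and end points:

gradientLayer.startPoint = CGPoint(x: 0.0, y: 0.5)
gradientLayer.endPoint = CGPoint(x: 1.0, y: 0.5)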