SceneKit, flip direction of SCNMaterial - ios

I'm extremely new to SceneKit, so I'm just looking for help here:
I have an SCNSphere with a camera at its center.
I create an SCNMaterial, double-sided, and assign it to the sphere.
Since the camera is at the center, the image looks flipped vertically, which completely messes things up when there is text in it.
So how can I flip the material, or the image (although later it will be frames from a video)? Any other suggestion is welcome.
This attempt, by the way, is failing on me: normalImage is applied as a material (but the image is flipped when looking from inside the sphere), while assigning flippedImage results in no material whatsoever (white screen):
let normalImage = UIImage(named: "text2.png")
let ciimage = CIImage(CGImage: normalImage!.CGImage!)
let flippedCIImage = ciimage.imageByApplyingTransform(CGAffineTransformMakeScale(-1, 1))
// Note: a UIImage backed only by a CIImage has no underlying CGImage,
// which SceneKit may not accept as material contents (likely why this shows white).
let flippedImage = UIImage(CIImage: flippedCIImage, scale: 1.0, orientation: .Left)
sceneMaterial.diffuse.contents = flippedImage
sceneMaterial.specular.contents = UIColor.whiteColor()
sceneMaterial.doubleSided = true
sceneMaterial.shininess = 0.5

Instead of scaling the node (which may break your lighting), you can flip the mapping using SCNMaterialProperty's contentsTransform property:
material.diffuse.contentsTransform = SCNMatrix4MakeScale(1,-1,1)
material.diffuse.wrapT = SCNWrapModeRepeat // or translate contentsTransform by (0,1,0)

To flip the image horizontally:
material.diffuse.contentsTransform = SCNMatrix4Translate(SCNMatrix4MakeScale(-1, 1, 1), 1, 0, 0)
to flip it vertically:
material.diffuse.contentsTransform = SCNMatrix4Translate(SCNMatrix4MakeScale(1, -1, 1), 0, 1, 0)
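Putting this together for the camera-inside-a-sphere setup from the question, a minimal sketch might look like this (modern Swift spelling; the image name comes from the question, the radius is arbitrary, and it uses the horizontal flip from above — swap in the vertical variant if that is the direction you need):
let sphere = SCNSphere(radius: 10)
let sceneMaterial = SCNMaterial()
sceneMaterial.isDoubleSided = true
sceneMaterial.diffuse.contents = UIImage(named: "text2.png")
// mirror the texture horizontally, then shift it back into the 0...1 UV range
sceneMaterial.diffuse.contentsTransform = SCNMatrix4Translate(SCNMatrix4MakeScale(-1, 1, 1), 1, 0, 0)
// alternatively, drop the translation and let the texture wrap instead
// sceneMaterial.diffuse.wrapS = .repeat
sphere.materials = [sceneMaterial]
let sphereNode = SCNNode(geometry: sphere)
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
sphereNode.addChildNode(cameraNode) // the camera sits at the sphere's center by default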

This worked for me, flipping the normal of the geometry by scaling the node it's attached to:
sphereNode.scale = SCNVector3Make(-1, 1, 1)

The accepted answer will not work, guaranteed. Following is how to flip an image that is assigned as the value of the [material].diffuse.contents property; it assumes there are two cubes in the scene, side by side:
// Define the matrices that perform the two orientation variants
SCNMatrix4 flip_horizontal;
flip_horizontal = SCNMatrix4Translate(SCNMatrix4MakeScale(-1, 1, 1), 1, 0, 0);
SCNMatrix4 flip_vertical;
flip_vertical = SCNMatrix4Translate(SCNMatrix4MakeScale(1, -1, 1), 0, 1, 0);
// Create the material objects for each cube, and assign an image as the contents
self.source_material = [SCNMaterial material];
self.source_material.diffuse.contents = [UIImage imageNamed:@"towelface.png"];
self.mirror_material = [SCNMaterial material];
self.mirror_material.diffuse.contents = self.source_material.diffuse.contents;
Pick only one of the following sections (as defined by the comments):
// PortraitOpposingDown
[self.mirror_material.diffuse setContentsTransform:SCNMatrix4Mult(self.source_material.diffuse.contentsTransform, flip_vertical)];
[self.source_material.diffuse setContentsTransform:SCNMatrix4Mult(self.mirror_material.diffuse.contentsTransform, flip_horizontal)];
// PortraitFacingDown
[self.source_material.diffuse setContentsTransform:SCNMatrix4Mult(self.source_material.diffuse.contentsTransform, flip_vertical)];
[self.mirror_material.diffuse setContentsTransform:SCNMatrix4Mult(self.source_material.diffuse.contentsTransform, flip_horizontal)];
// PortraitOpposingUp
[self.source_material.diffuse setContentsTransform:SCNMatrix4Mult(self.source_material.diffuse.contentsTransform, flip_horizontal)];
// PortraitFacingUp
[self.mirror_material.diffuse setContentsTransform:SCNMatrix4Mult(self.source_material.diffuse.contentsTransform, flip_horizontal)];
Insert the material at the desired index:
[cube[0] insertMaterial:self.source_material atIndex:0];
[cube[1] insertMaterial:self.mirror_material atIndex:0];
By the way, to insert a new image (such as for live video), simply replace the material at the index specified in insertMaterial:atIndex:; do not reorient the contentsTransform. The following code shows you how; it assumes that your video camera is configured to output a sample buffer for each captured frame to an AVCaptureVideoDataOutputSampleBufferDelegate, and that you have the requisite code (CreateCGImageFromCVPixelBuffer) to create a CGImage from a CVPixelBuffer:
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CGImageRef cgImage;
    CreateCGImageFromCVPixelBuffer(pixelBuffer, &cgImage);
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    dispatch_async(dispatch_get_main_queue(), ^{
        self.source_material.diffuse.contents = image;
        [cube[0] replaceMaterialAtIndex:0 withMaterial:self.source_material];
        self.mirror_material.diffuse.contents = self.source_material.diffuse.contents;
        [cube[1] replaceMaterialAtIndex:0 withMaterial:self.mirror_material];
    });
    CGImageRelease(cgImage);
}
If you'd like actual code instead of my assumptions of code on your end, please ask. Here's a short video showing this code in action, but with live video instead of a static image.
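For reference, here is a Swift sketch of one possible implementation of the CreateCGImageFromCVPixelBuffer helper assumed above, using Core Image (the original answer does not show its implementation, so treat this purely as an illustration):
import CoreImage
import CoreVideo
// Reuse a single CIContext; creating one per frame is expensive.
let ciContext = CIContext()
func cgImage(from pixelBuffer: CVPixelBuffer) -> CGImage? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    return ciContext.createCGImage(ciImage, from: ciImage.extent)
}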

Related

3D viewer for iOS using MetalKit and Swift - Depth doesn’t work

I'm using Metal with Swift to build a 3D viewer for iOS and I have some issues making the depth work. For now, I can draw and render a single shape correctly in 3D (like a simple square plane (4 triangles, 2 for each face) or a tetrahedron (4 triangles)).
However, when I try to draw 2 shapes together, the depth between these two shapes doesn't work. For example, a plane is placed at Z = 0 behind a tetra which is placed at Z > 0. If I look at this scene from the back (camera placed somewhere at Z < 0), it's OK. But when I look at this scene from the front (camera placed somewhere at Z > 0), it doesn't work. The plane is drawn before the tetra even though it is placed behind the tetra.
I think that the plane is always drawn on the screen before the tetra (no matter the position of the camera) because the call to drawPrimitives for the plane is done before the call for the tetra. However, I was thinking that all the depth and stencil settings would deal with that properly.
I don't know if the depth isn't working because the depth texture, stencil state, and so on are not correctly set, or because each shape is drawn in a different call to drawPrimitives.
In other words, do I have to draw all shapes in the same call to drawPrimitives to make depth work? The idea behind the multiple calls to drawPrimitives is to deal with a different kind of primitive type for each shape (triangle or line or …).
This is how I set up the depth stencil state, the depth texture, and the render pipeline:
init() {
    // some miscellaneous initialisation …
    // …
    // all MTL stuff :
    commandQueue = device.makeCommandQueue()

    // Stencil descriptor
    let depthStencilDescriptor = MTLDepthStencilDescriptor()
    depthStencilDescriptor.depthCompareFunction = .less
    depthStencilDescriptor.isDepthWriteEnabled = true
    depthStencilState = device.makeDepthStencilState(descriptor: depthStencilDescriptor)!

    // Library and pipeline descriptor & state
    let library = try! device.makeLibrary(source: shaders, options: nil)
    // Our vertex function name
    let vertexFunction = library.makeFunction(name: "basic_vertex_function")
    // Our fragment function name
    let fragmentFunction = library.makeFunction(name: "basic_fragment_function")
    // Create basic descriptor
    let renderPipelineDescriptor = MTLRenderPipelineDescriptor()
    // Attach the pixel format that is the same as the MetalView
    renderPipelineDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
    renderPipelineDescriptor.depthAttachmentPixelFormat = .depth32Float_stencil8
    renderPipelineDescriptor.stencilAttachmentPixelFormat = .depth32Float_stencil8
    //renderPipelineDescriptor.stencilAttachmentPixelFormat = .stencil8
    // Attach the shader functions
    renderPipelineDescriptor.vertexFunction = vertexFunction
    renderPipelineDescriptor.fragmentFunction = fragmentFunction
    // Try to update the state of the renderPipeline
    do {
        renderPipelineState = try device.makeRenderPipelineState(descriptor: renderPipelineDescriptor)
    } catch {
        print(error.localizedDescription)
    }

    // Depth Texture
    let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .stencil8, width: 576, height: 723, mipmapped: false)
    desc.storageMode = .private
    desc.usage = .pixelFormatView
    depthTexture = device.makeTexture(descriptor: desc)!

    // Uniforms buffer
    modelMatrix = Matrix4()
    modelMatrix.multiplyLeft(worldMatrix)
    uniformBuffer = device.makeBuffer(length: MemoryLayout<Float>.stride * 16 * 2, options: [])
    let bufferPointer = uniformBuffer.contents()
    memcpy(bufferPointer, &modelMatrix.matrix.m, MemoryLayout<Float>.stride * 16)
    memcpy(bufferPointer + MemoryLayout<Float>.stride * 16, &projectionMatrix.matrix.m, MemoryLayout<Float>.stride * 16)
}
And the draw function:
func draw(in view: MTKView) {
    // create render pass descriptor
    guard let drawable = view.currentDrawable,
          let renderPassDescriptor = view.currentRenderPassDescriptor else {
        return
    }
    renderPassDescriptor.depthAttachment.texture = depthTexture
    renderPassDescriptor.depthAttachment.clearDepth = 1.0
    //renderPassDescriptor.depthAttachment.loadAction = .load
    renderPassDescriptor.depthAttachment.loadAction = .clear
    renderPassDescriptor.depthAttachment.storeAction = .store

    // Create a buffer from the commandQueue
    let commandBuffer = commandQueue.makeCommandBuffer()
    let commandEncoder = commandBuffer?.makeRenderCommandEncoder(descriptor: renderPassDescriptor)
    commandEncoder?.setRenderPipelineState(renderPipelineState)
    commandEncoder?.setFrontFacing(.counterClockwise)
    commandEncoder?.setCullMode(.back)
    commandEncoder?.setDepthStencilState(depthStencilState)

    // Draw all obj in objects
    // objects = array of Object; each object describing vertices and primitive type of a shape
    // objects[0] = Plane, objects[1] = Tetra
    for obj in objects {
        createVertexBuffers(device: view.device!, vertices: obj.vertices)
        commandEncoder?.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
        commandEncoder?.setVertexBuffer(uniformBuffer, offset: 0, index: 1)
        commandEncoder?.drawPrimitives(type: obj.primitive, vertexStart: 0, vertexCount: obj.vertices.count)
    }

    commandEncoder?.endEncoding()
    commandBuffer?.present(drawable)
    commandBuffer?.commit()
}
Does anyone have an idea of what is wrong or missing?
Any advice is welcome!
Edited 09/23/2022: Code updated
A few things off the top of my head:
First
let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .depth32Float_stencil8, width: 576, height: 723, mipmapped: false)
Second
renderPipelineDescriptor.depthAttachmentPixelFormat = .depth32Float_stencil8
Notice the pixelFormat should be the same in both places, and since you seem to be using the stencil test as well, depth32Float_stencil8 will be perfect.
Third
Now, another thing you seem to be missing is clearing the depth texture before every render pass, am I right?
So, you should set load action of depth attachment to .clear, like this:
renderPassDescriptor.depthAttachment.loadAction = .clear
Fourth (subject to your use case)
If none of the above works, you might need to discard fragments with alpha = 0 in your fragment function by calling discard_fragment() when the color you are returning has alpha 0.
Also note for future:
Ideally you want the depth texture to be fresh and empty when every new frame starts getting rendered (the first draw call of a render pass), and then reuse it for subsequent draw calls in the same render pass by setting the load action to .load and the store action to .store.
For example, assuming you have 3 draw calls in one frame, say drawing a triangle, a rectangle, and a sphere, then your depth attachment setup should be like this (a Swift sketch of this setup follows the outline):
Frame 1 Starts:
First Draw: triangle
loadAction: Clear
storeAction: Store
Second Draw: rectangle
loadAction: load
storeAction: Store
Third Draw: sphere
loadAction: load
storeAction: store/dontcare
Frame 2 Starts: Notice you clear depth buffer for 1st draw call of new frame
First Draw: triangle
loadAction: Clear
storeAction: Store
Second Draw: rectangle
loadAction: load
storeAction: Store
Third Draw: sphere
loadAction: load
storeAction: store/dontcare
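A minimal Swift sketch of that outline, assuming one render pass per draw call and a depthTexture you own (the helper name and parameters are illustrative, not from the question's code):
import MetalKit
// The first draw of a frame clears the depth texture; later draws load what
// earlier draws stored, so depth testing works across the whole frame.
func makeRenderPassDescriptor(drawable: CAMetalDrawable,
                              depthTexture: MTLTexture,
                              isFirstDrawOfFrame: Bool) -> MTLRenderPassDescriptor {
    let descriptor = MTLRenderPassDescriptor()
    descriptor.colorAttachments[0].texture = drawable.texture
    descriptor.colorAttachments[0].loadAction = isFirstDrawOfFrame ? .clear : .load
    descriptor.colorAttachments[0].storeAction = .store
    descriptor.depthAttachment.texture = depthTexture
    descriptor.depthAttachment.clearDepth = 1.0
    descriptor.depthAttachment.loadAction = isFirstDrawOfFrame ? .clear : .load
    descriptor.depthAttachment.storeAction = .store
    return descriptor
}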
Your depth texture pixel format is not correct; try changing it to MTLPixelFormatDepth32Float or MTLPixelFormatDepth32Float_Stencil8.

Off Screen Rendering

In off-screen rendering in Metal:
let textureDescriptors = MTLTextureDescriptor()
textureDescriptors.textureType = MTLTextureType.type2D
let screenRatio = UIScreen.main.scale
textureDescriptors.width = Int((DrawingManager.shared.size?.width)!) * Int(screenRatio)
textureDescriptors.height = Int((DrawingManager.shared.size?.height)!) * Int(screenRatio)
textureDescriptors.pixelFormat = .bgra8Unorm
textureDescriptors.storageMode = .shared
textureDescriptors.usage = [.renderTarget, .shaderRead]
ssTexture = device.makeTexture(descriptor: textureDescriptors)
ssTexture.label = "ssTexture"
Here the texture is in clear color. Is it possible to load an image into the texture, and is it possible to render that image texture in the draw method?
let renderPass = MTLRenderPassDescriptor()
renderPass.colorAttachments[0].loadAction = .clear
renderPass.colorAttachments[0].clearColor = MTLClearColorMake( 0.0, 0.0, 0.0, 0.0)
renderPass.colorAttachments[0].texture = ssTexture
renderPass.colorAttachments[0].storeAction = .store
I'm not sure what you're asking.
There's MTKTextureLoader for creating textures initialized with the contents of an image.
You can use the replace(region:...) methods of MTLTexture to fill all or part of a texture with image data.
You can use MTLBlitCommandEncoder to copy data from one texture to (all or part of) another or from a buffer to a texture.
You can draw to a texture or write to it from a compute shader.
It's a general-purpose API. There are many ways to do the things you seem to be asking. What have you tried? In what way did those attempts fail to achieve what you want?
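For the MTKTextureLoader route, a minimal sketch (the image name "background" is a hypothetical bundled image; adjust the options to your pipeline):
import MetalKit
import UIKit
// Load a bundled image into an MTLTexture that a render pass can sample from.
func loadImageTexture(device: MTLDevice) throws -> MTLTexture {
    let loader = MTKTextureLoader(device: device)
    guard let cgImage = UIImage(named: "background")?.cgImage else {
        throw NSError(domain: "TextureLoading", code: 1)
    }
    // .SRGB: false keeps the raw pixel values; set it to match what your shaders expect
    return try loader.newTexture(cgImage: cgImage, options: [.SRGB: false])
}
From there you can either sample the texture in your draw call or copy it into ssTexture with an MTLBlitCommandEncoder.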

Adding semi-transparent images as textures in Scenekit

When I add a semi-transparent image (sample) as a texture for an SCNNode, how can I specify a color for the node wherever the image is transparent? Since I am able to specify either a color or an image as a material property's contents, I am unable to specify the color value for the node as well. Is there a way to specify both a color and an image for the material property, or is there a workaround for this problem?
If you are assigning the image to the contents of the transparent material property, you can change the material's transparencyMode to either .AOne or .RGBZero.
.AOne means that transparency is derived from the image's alpha channel.
.RGBZero means that transparency is derived from the luminance (the total red, green, and blue) in the image.
You cannot configure an arbitrary color to be treated as transparency without a custom shader.
However, from the looks of your sample image, I would think that assigning it to the transparent material property's contents and using the .AOne transparency mode would give you the result you are looking for.
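A minimal sketch of that suggestion (modern Swift spelling, so .aOne rather than .AOne; "sample.png" and someNode are placeholders):
let material = SCNMaterial()
material.diffuse.contents = UIImage(named: "sample.png")
// the alpha channel of the transparent property's contents drives the node's transparency
material.transparent.contents = UIImage(named: "sample.png")
material.transparencyMode = .aOne
someNode.geometry?.firstMaterial = material // someNode: an SCNNode already in your scene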
I'm posting this as a new answer because it's different from the other answer.
Based on your comment, I understand that you want to use an image with transparency as the diffuse content of a material, but use a background color wherever the image is transparent. In other words, you want to use a composite of the image over a color as the diffuse contents.
Using UIImage
There are a few different ways you can achieve this composited image. The easiest and likely most familiar solution is to create a new UIImage that draws the image over the color. This new image will have the same size and scale as your image, but can be opaque since it has a solid background color.
func imageByComposing(image: UIImage, over color: UIColor) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(image.size, true, image.scale)
    defer {
        UIGraphicsEndImageContext()
    }
    let imageRect = CGRect(origin: .zero, size: image.size)
    // fill with background color
    color.set()
    UIRectFill(imageRect)
    // draw image on top
    image.drawInRect(imageRect)
    return UIGraphicsGetImageFromCurrentImageContext()
}
Using this image as the contents of the diffuse material property will give you the effect that you're after.
Using Shader Modifiers
If you find yourself having to change the color very frequently (possibly animating it), you could also use custom shaders or shader modifiers to composite the image over the color.
In that case, you want to composite the image A over the color B, so that the output color (C_O) is:
C_O = C_A + C_B * (1 - α_A)
By passing the image as the diffuse contents, and assigning the output to the diffuse content, the expression can be simplified as:
C_diffuse = C_diffuse + C_color * (1 - α_diffuse)
C_diffuse += C_color * (1 - α_diffuse)
Generally the output alpha would depend on the alpha of A and B, but since B (the color) is opaque (1), the output alpha is also 1.
This can be written as a small shader modifier. Since the motivation for this solution was to be able to change the color, the color is exposed as a uniform variable which can be updated in code.
// Define a color that can be set/changed from code
uniform vec3 backgroundColor;
#pragma body
// Composite A (the image) over B (the color):
// output = image + color * (1-alpha_image)
float alpha = _surface.diffuse.a;
_surface.diffuse.rgb += backgroundColor * (1.0 - alpha);
// make fully opaque (since the color is fully opaque)
_surface.diffuse.a = 1.0;
This shader modifier would then be read from the file and set in the material's shader modifiers dictionary:
enum ShaderLoadingError: ErrorType {
    case FileNotFound, FailedToLoad
}

func shaderModifier(named shaderName: String, fileExtension: String = "glsl") throws -> String {
    guard let url = NSBundle.mainBundle().URLForResource(shaderName, withExtension: fileExtension) else {
        throw ShaderLoadingError.FileNotFound
    }
    do {
        return try String(contentsOfURL: url)
    } catch {
        throw ShaderLoadingError.FailedToLoad
    }
}

// later, in the code that configures the material ...
do {
    let modifier = try shaderModifier(named: "Composit") // the name of the shader modifier file (assuming 'glsl' file extension)
    theMaterial.shaderModifiers = [SCNShaderModifierEntryPointSurface: modifier]
} catch {
    // Handle the error here
    print(error)
}
You would then be able to change the color by setting a new value for the "backgroundColor" of the material. Note that there is no initial value, so one would have to be set.
let backgroundColor = SCNVector3Make(1.0, 0.0, 0.7) // r, g, b
// Set the color components as an SCNVector3 wrapped in an NSValue
// for the same key as the name of the uniform variable in the shader modifier
theMaterial.setValue(NSValue(SCNVector3: backgroundColor), forKey: "backgroundColor")
As you can see, the first solution is simpler and the one I would recommend if it suits your needs. The second solution is more complicated, but enables the background color to be animated.
Just in case someone comes across this in the future... for some tasks, rickster's solution is likely the easiest. In my case, I wanted to display a grid on top of an image that was mapped to a sphere. I originally composited the images into one and applied that, but over time I got more fancy and this started getting complex. So I made two spheres, one inside the other. I put the grid on the inner one and the image on the outer one, and presto...
let outSphereGeometry = SCNSphere(radius: 20)
outSphereGeometry.segmentCount = 100
let outSphereMaterial = SCNMaterial()
outSphereMaterial.diffuse.contents = topImage
outSphereMaterial.isDoubleSided = true
outSphereGeometry.materials = [outSphereMaterial]
outSphere = SCNNode(geometry: outSphereGeometry)
outSphere.position = SCNVector3(x: 0, y: 0, z: 0)
let sphereGeometry = SCNSphere(radius: 10)
sphereGeometry.segmentCount = 100
let sphereMaterial = SCNMaterial()
sphereMaterial.diffuse.contents = gridImage
sphereMaterial.isDoubleSided = true
sphereGeometry.materials = [sphereMaterial]
sphere = SCNNode(geometry: sphereGeometry)
sphere.position = SCNVector3(x: 0, y: 0, z: 0)
I was surprised that I didn't need to set sphereMaterial.transparency, it seems to get this automatically.

How to add transparency with a shader in SceneKit?

I would like to have a transparency effect from an image; for now I'm just testing with a torus, but the shader does not seem to work with alpha. From what I understood from this thread (Using Blending Functions in Scenekit) and this wiki page about transparency (http://en.wikibooks.org/wiki/GLSL_Programming/GLUT/Transparency), glBlendFunc is replaced by #pragma transparent in SceneKit.
Would you know what is wrong with this code?
I created a new project with SceneKit, and I changed the ship mesh for a torus.
EDIT :
I am trying with a plane, but the image below does not appear inside the plane, instead I get the image with the red and brownish boxes below.
My image with alpha:
The result (the image with alpha should replace the brownish color):
let plane = SCNPlane(width: 2, height: 2)
var texture = SKTexture(imageNamed:"small")
texture.filteringMode = SKTextureFilteringMode.Nearest
plane.firstMaterial?.diffuse.contents = texture
let ship = SCNNode(geometry: plane) //SCNTorus(ringRadius: 1, pipeRadius: 0.5)
ship.position = SCNVector3(x: 0, y: 0, z: 15)
scene.rootNode.addChildNode(ship)
let myscale : CGFloat = 10
let box = SCNBox(width: myscale, height: myscale, length: myscale, chamferRadius: 0)
box.firstMaterial?.diffuse.contents = UIColor.redColor()
let theBox = SCNNode(geometry: box)
theBox.position = SCNVector3(x: 0, y: 0, z: 5)
scene.rootNode.addChildNode(theBox)
let scnView = self.view as SCNView
scnView.scene = scene
scnView.backgroundColor = UIColor.blackColor()
var shaders = NSMutableDictionary()
shaders[SCNShaderModifierEntryPointFragment] = String(contentsOfFile: NSBundle.mainBundle().pathForResource("test", ofType: "shader")!, encoding: NSUTF8StringEncoding, error: nil)
var material = SCNMaterial()
material.shaderModifiers = shaders
ship.geometry?.materials = [material]
The shader :
#pragma transparent
#pragma body
_output.color.rgba = vec4(0.0, 0.2, 0.0, 0.2);
SceneKit uses premultiplied alpha (the r, g, and b fields should be multiplied by the desired a):
vec4(0.0, 0.2, 0.0, 0.2); // `vec4(0.0, 1.0, 0.0, 1.0) * alpha` with alpha = 0.2
I was struggling with this problem too. Finally I found out that to make '#pragma transparent' work, I had to add it to another shader other than the one executing my transparency code.
For example, I added transparency code to the surface shader, and added '#pragma transparent' to the geometry shader. The Apple API documentation also adds '#pragma transparent' to the geometry shader; I don't know if that was intentional.
NSString *geometryScript = @""
    "#pragma transparent";
NSString *surfaceScript = @""
    //"#pragma transparent" // You must not put it together with the transparency code
    "float a = 0.1;"
    "_surface.diffuse = vec4(_surface.diffuse.rgb * a, a);";
// This works for the transparency code in surface shader too.
//NSString *fragmentScript = @""
//"#pragma transparent";
yourMaterial.shaderModifiers = @{SCNShaderModifierEntryPointGeometry: geometryScript,
                                 SCNShaderModifierEntryPointSurface: surfaceScript};
This code works in iOS 11.2, Xcode 9.2.
This rule applies to SCNShaderModifierEntryPointFragment shader as well. Likewise, if you want to change transparency there, you can add '#pragma transparent' to the geometry shader or the surface shader. I haven't tested SCNShaderModifierEntryPointLightingModel shader.
If you don't add any '#pragma transparent' to a shader, a black background may be blended with the transparent pixels.
Adding transparency can be done quite easily in the SCNShadable Surface or Fragment entry point.
The SCNShaderModifierEntryPointSurface entry point version
#pragma transparent
#pragma body
_surface.diffuse.a = 0.5;
The SCNShaderModifierEntryPointFragment entry point version
#pragma transparent
#pragma body
_output.color.a = 0.5;

Metal MTLTexture replaces semi-transparent areas with black when alpha values aren't 1 or 0

While using Apple's texture importer, or my own, a white soft-edged circle drawn in software (with a transparent background) or in Photoshop (saved as a PNG) will have its semi-transparent colors replaced with black when rendered in Metal.
Below is a screen grab from Xcode's Metal debugger; you can see the texture before it is sent to the shaders.
Image located here (I'm not high ranked enough to embed)
In Xcode, Finder, and when put into a UIImageView, the source texture does not have the ring. But somewhere along the UIImage -> CGContext -> MTLTexture process (I'm thinking specifically the MTLTexture part) the transparent sections are darkened.
I've been banging my head against the wall changing everything I could for the past couple of days but I can't figure it out.
To be transparent (ha), here is my personal import code:
import UIKit
import CoreGraphics
import Metal

class MetalTexture {
    class func imageToTexture(imageNamed: String, device: MTLDevice) -> MTLTexture {
        let bytesPerPixel = 4
        let bitsPerComponent = 8
        var image = UIImage(named: imageNamed)!
        let width = Int(image.size.width)
        let height = Int(image.size.height)
        let bounds = CGRectMake(0, 0, CGFloat(width), CGFloat(height))
        var rowBytes = width * bytesPerPixel
        var colorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGBitmapContextCreate(nil, width, height, bitsPerComponent, rowBytes, colorSpace, CGBitmapInfo(CGImageAlphaInfo.PremultipliedLast.rawValue))
        CGContextClearRect(context, bounds)
        CGContextTranslateCTM(context, CGFloat(width), CGFloat(height))
        CGContextScaleCTM(context, -1.0, -1.0)
        CGContextDrawImage(context, bounds, image.CGImage)

        var texDescriptor = MTLTextureDescriptor.texture2DDescriptorWithPixelFormat(.RGBA8Unorm, width: width, height: height, mipmapped: false)
        var texture = device.newTextureWithDescriptor(texDescriptor)
        texture.label = imageNamed

        var pixelsData = CGBitmapContextGetData(context)
        var region = MTLRegionMake2D(0, 0, width, height)
        texture.replaceRegion(region, mipmapLevel: 0, withBytes: pixelsData, bytesPerRow: rowBytes)
        return texture
    }
}
But I don't think that's the problem (since it's a copy of Apple's in Swift, and I've used theirs instead with no differences).
Any leads at all would be super helpful.
Since you are using a CGContext that is configured to use colors premultiplied with the alpha channel, the standard blend RGB = src.rgb * src.a + dst.rgb * (1 - src.a) will cause darkened areas, because the src.rgb values are already premultiplied with src.a. This means that what you want is RGB = src.rgb + dst.rgb * (1 - src.a), which is configured like this:
pipeline.colorAttachments[0].isBlendingEnabled = true
pipeline.colorAttachments[0].rgbBlendOperation = .add
pipeline.colorAttachments[0].alphaBlendOperation = .add
pipeline.colorAttachments[0].sourceRGBBlendFactor = .one
pipeline.colorAttachments[0].sourceAlphaBlendFactor = .sourceAlpha
pipeline.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha
pipeline.colorAttachments[0].destinationAlphaBlendFactor = .oneMinusSourceAlpha
The .one means to leave RGB as-is. The reason for premultiplied colors is that you only need to multiply the colors once on creation as opposed to every time you blend. Your configuration achieves this too, but indirectly.
Thanks to Jessy, I decided to look at how I was blending my alphas, and I've figured it out. My texture still looks darkened in the GPU debugger, but in the actual app everything looks correct. I made my changes to my pipeline state descriptor, which you can see below.
pipelineStateDescriptor.colorAttachments[0].blendingEnabled = true
pipelineStateDescriptor.colorAttachments[0].rgbBlendOperation = .Add
pipelineStateDescriptor.colorAttachments[0].alphaBlendOperation = .Add
pipelineStateDescriptor.colorAttachments[0].sourceRGBBlendFactor = .DestinationAlpha
pipelineStateDescriptor.colorAttachments[0].sourceAlphaBlendFactor = .DestinationAlpha
pipelineStateDescriptor.colorAttachments[0].destinationRGBBlendFactor = .OneMinusSourceAlpha
pipelineStateDescriptor.colorAttachments[0].destinationAlphaBlendFactor = .OneMinusBlendAlpha
