So, I'm trying to render a cube with a 3D texture. The texture contains 3 slices of 3 different colors: red, green and blue. Each slice consists of 4 pixels of the same color. This works fine. https://imgur.com/a/a5oXi
private func makeTexture() {
    let width = 2
    let height = 2
    let depth = 3
    let byteSize = 4
    let bytesPerRow = byteSize * width
    let bytesPerImage = bytesPerRow * height

    let blue: UInt32 = 0x000000FF
    let green: UInt32 = 0xFF00FF00
    let red: UInt32 = 0x00FF0000

    let textureDescriptor = MTLTextureDescriptor()
    textureDescriptor.pixelFormat = .bgra8Unorm
    textureDescriptor.width = width
    textureDescriptor.height = height
    textureDescriptor.depth = depth
    textureDescriptor.textureType = .type3D

    let image = UnsafeMutableRawPointer.allocate(bytes: width * height * depth * byteSize, alignedTo: 1)
    // Slice 0: red
    image.storeBytes(of: red, toByteOffset: 0, as: UInt32.self)
    image.storeBytes(of: red, toByteOffset: 4, as: UInt32.self)
    image.storeBytes(of: red, toByteOffset: 8, as: UInt32.self)
    image.storeBytes(of: red, toByteOffset: 12, as: UInt32.self)
    // Slice 1: green
    image.storeBytes(of: green, toByteOffset: 16, as: UInt32.self)
    image.storeBytes(of: green, toByteOffset: 20, as: UInt32.self)
    image.storeBytes(of: green, toByteOffset: 24, as: UInt32.self)
    image.storeBytes(of: green, toByteOffset: 28, as: UInt32.self)
    // Slice 2: blue
    image.storeBytes(of: blue, toByteOffset: 32, as: UInt32.self)
    image.storeBytes(of: blue, toByteOffset: 36, as: UInt32.self)
    image.storeBytes(of: blue, toByteOffset: 40, as: UInt32.self)
    image.storeBytes(of: blue, toByteOffset: 44, as: UInt32.self)

    texture = device?.makeTexture(descriptor: textureDescriptor)
    let region = MTLRegionMake3D(0, 0, 0, width, height, depth)
    texture?.replace(region: region,
                     mipmapLevel: 0,
                     slice: 0,
                     withBytes: image,
                     bytesPerRow: bytesPerRow,
                     bytesPerImage: bytesPerImage)
}
Fragment shader code:

struct VertexOut {
    float4 position [[position]];
    float3 textureCoordinate;
};

fragment half4 basic_fragment(VertexOut in [[stage_in]],
                              texture3d<half> colorTexture [[ texture(0) ]]) {
    constexpr sampler textureSampler (mag_filter::nearest,
                                      min_filter::nearest);
    // Sample the texture to obtain a color
    const half4 colorSample = colorTexture.sample(textureSampler, in.textureCoordinate);
    // We return the color of the texture
    return colorSample;
}
Then I want to make the red and blue slices transparent, so I set their alphas to 0:
let blue: UInt32 = 0x000000FF
let green: UInt32 = 0xFF00FF00
let red: UInt32 = 0x00FF0000
The fragment shader now contains:

const half4 colorSample = colorTexture.sample(textureSampler, in.textureCoordinate);
if (colorSample.a <= 0)
    discard_fragment();
and I expect to see a cut with green color, but I see just green edges: https://imgur.com/a/yGQdQ.
There is nothing inside the cube, and I don't even see the back edges because cullMode is set to .front.
Can I draw and see the texture within the object, so I can see its insides? I haven't found a way so far. When I set the texture type to 3D, shouldn't it calculate the color for every pixel of the 3D object, not just the edges? Maybe it does, but doesn't display it?
No, 3D textures don't get you that.
There is no 3D object, there are just triangles (which you provide). Those are 2D objects, although they are positioned within 3D space. Metal does not try to figure out what solid object you're trying to draw by extrapolating from the triangles you tell it to draw. No common 3D-drawing API does that. It's not generally possible. Among other things, keep in mind that you don't even have to give all of the triangles to Metal together; they could be split across draw calls.
There is no "inside" to any object, as far as Metal knows, just points, lines, and triangles. If you want to render the inside of an object, you have to model that. For a slice of a cube, you have to compute the new surfaces of the "exposed inside" and pass triangles to Metal to draw that.
A 3D texture is just a texture that you can sample with a 3D coordinate. Note that the decision about what fragments to draw has already been made before your fragment shader is called and Metal doesn't even know you'll be using a 3D texture at the time it makes those decisions.
Related
I'm using ARKit with scene reconstruction and need to render the captured scene geometry in Metal. I can access this geometry through ARMeshAnchor.geometry, which is an ARMeshGeometry. However, when I try rendering it using my custom Metal rendering pipeline, nothing renders and I get a bunch of errors like this:
Invalid device load executing vertex function "myVertex" encoder: "0", draw: 3, at offset 4688
Here's a highly simplified version of my code that I've been using for debugging:
struct InOut {
    float4 position [[position]];
};

vertex InOut myVertex(
    uint vid [[vertex_id]],
    const constant float3* vertexArray [[buffer(0)]])
{
    InOut out;

    const float3 in = vertexArray[vid];
    out.position = float4(in, 1);

    return out;
}

fragment float4 myFragment(InOut in [[stage_in]]) {
    return float4(1, 0, 0, 1);
}
// Setup MTLRenderPipelineDescriptor
let pipelineDescriptor = MTLRenderPipelineDescriptor()
pipelineDescriptor.colorAttachments[0].pixelFormat = .rgba8Unorm
pipelineDescriptor.sampleCount = 1
pipelineDescriptor.vertexFunction = defaultLibrary.makeFunction(name: "myVertex")
pipelineDescriptor.fragmentFunction = defaultLibrary.makeFunction(name: "myFragment")
let vertexDescriptor = MTLVertexDescriptor()
vertexDescriptor.attributes[0].format = .float3
vertexDescriptor.attributes[0].offset = 0
vertexDescriptor.attributes[0].bufferIndex = 0
vertexDescriptor.layouts[0].stride = MemoryLayout<SIMD3<Float>>.stride
pipelineDescriptor.vertexDescriptor = vertexDescriptor
func render(arMesh: ARMeshAnchor) {
    // snip... — Setting up command buffers

    let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor)!
    renderEncoder.setViewport(MTLViewport(originX: 0, originY: 0, width: 512, height: 512, znear: 0, zfar: 1))
    renderEncoder.setRenderPipelineState(renderPipelineState)

    let vertices = arMesh.geometry.vertices
    let faces = arMesh.geometry.faces

    renderEncoder.setVertexBuffer(vertices.buffer, offset: 0, index: 0)
    renderEncoder.drawIndexedPrimitives(type: .triangle, indexCount: faces.count * 3, indexType: .uint32, indexBuffer: faces.buffer, indexBufferOffset: 0)

    renderEncoder.endEncoding()

    // snip... — Clean up
}
I can't figure out why this code causes the Metal exception. It stops throwing if I cap vid in the shader to around 100, but it still doesn't draw anything properly.
What's going on here? Why does my code produce an error and how can I fix it?
The problem here is the alignment/packing of the vertex data.
Each vertex in ARMeshGeometry.vertices consists of 3 float components, for a total size of 12 bytes. The code above assumes that this means the data is a float3 / SIMD3<Float>; however, the vertices from ARMeshGeometry are actually tightly packed. So while SIMD3<Float> has a stride of 16, the actual vertex data has a stride of 12.
The larger size of float3 (16) versus the actual size of elements in the vertices buffer (12) results in Metal trying to access data off the end of the vertices buffer, producing the error.
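You can see the mismatch directly in Swift (values as reported on Apple platforms):

MemoryLayout<SIMD3<Float>>.stride   // 16, what the original vertex descriptor assumed
3 * MemoryLayout<Float>.stride      // 12, the tightly packed layout ARMeshGeometry actually uses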
There are two important fixes here:
Make sure the MTLVertexDescriptor has the correct stride:
let exampleMeshGeometry: ARMeshGeometry = ...
vertexDescriptor.layouts[0].stride = exampleMeshGeometry.vertices.stride
In the shader, use packed_float3 instead of float3
vertex InOut myVertex(
    uint vid [[vertex_id]],
    const constant packed_float3* vertexArray [[buffer(0)]])
{
    ...
}
After fixing these issues, you should be able to properly transfer ARMeshGeometry buffers to your Metal shader.
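Putting the two fixes together on the encoding side might look roughly like this (a sketch reusing the names from the question; passing vertices.offset and assuming .uint32 indices are my additions, not something the question confirms):

let geometry = arMesh.geometry
let vertices = geometry.vertices
let faces = geometry.faces

// Stride must match the tightly packed source data (12 bytes), not SIMD3<Float> (16 bytes).
// (Set this before creating the pipeline state.)
vertexDescriptor.layouts[0].stride = vertices.stride

renderEncoder.setVertexBuffer(vertices.buffer, offset: vertices.offset, index: 0)
renderEncoder.drawIndexedPrimitives(type: .triangle,
                                    indexCount: faces.count * 3,
                                    indexType: .uint32,
                                    indexBuffer: faces.buffer,
                                    indexBufferOffset: 0)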
OK, we have drawn several things in an MTKView. We can move and turn around them using Metal and MetalKit functions, but we are unable to get pixel information for a given point in 3D (x, y, z). We searched for several hours and could not find any solution for that.
It's impossible to get a color from 3D model space, but it is possible to get a color from 2D view space:
func getColor(x: CGFloat, y: CGFloat) -> UIColor? {
    if let curDrawable = self.currentDrawable {
        var pixel: [CUnsignedChar] = [0, 0, 0, 0]  // bgra

        let textureScale = CGFloat(curDrawable.texture.width) / self.bounds.width
        let bytesPerRow = curDrawable.texture.width * 4
        let y = self.bounds.height - y

        curDrawable.texture.getBytes(&pixel, bytesPerRow: bytesPerRow, from: MTLRegionMake2D(Int(x * textureScale), Int(y * textureScale), 1, 1), mipmapLevel: 0)

        let red: CGFloat   = CGFloat(pixel[2]) / 255.0
        let green: CGFloat = CGFloat(pixel[1]) / 255.0
        let blue: CGFloat  = CGFloat(pixel[0]) / 255.0
        let alpha: CGFloat = CGFloat(pixel[3]) / 255.0

        let color = UIColor(red: red, green: green, blue: blue, alpha: alpha)
        return color
    }
    return nil
}
This seems like a classic case of trying to consult your view (in the Model-View-Controller paradigm sense of the term) for information held by the model. Consult your model directly.
A 3D rendering technology like Metal is all about flattening 3D information to 2D and efficiently throwing away information not relevant to that 2D representation. Neither Metal nor the MTKView has any information about points in 3D by the end of the render.
Also note, your model probably doesn't have information for arbitrary 3D points. Most likely all of your 3D models (in the other sense of the term) are shells: they have no interiors, just surfaces. So, unless a 3D point falls exactly on the surface, neither inside nor outside the model, it has no color.
I have been trying to use texture2d_array for my application of live filters in Metal, but I'm not getting the proper result.
I'm creating the texture array like this.
Code: class MetalTextureArray.
class MetalTextureArray {
    private(set) var arrayTexture: MTLTexture
    private var width: Int
    private var height: Int

    init(_ width: Int, _ height: Int, _ arrayLength: Int, _ device: MTLDevice) {
        self.width = width
        self.height = height

        let textureDescriptor = MTLTextureDescriptor()
        textureDescriptor.textureType = .type2DArray
        textureDescriptor.pixelFormat = .bgra8Unorm
        textureDescriptor.width = width
        textureDescriptor.height = height
        textureDescriptor.arrayLength = arrayLength

        arrayTexture = device.makeTexture(descriptor: textureDescriptor)!
    }

    func append(_ texture: MTLTexture) -> Bool {
        if let bytes = texture.buffer?.contents() {
            let region = MTLRegion(origin: MTLOrigin(x: 0, y: 0, z: 0), size: MTLSize(width: width, height: height, depth: 1))
            arrayTexture.replace(region: region, mipmapLevel: 0, withBytes: bytes, bytesPerRow: texture.bufferBytesPerRow)
            return true
        }
        return false
    }
}
I'm encoding this texture into the renderEncoder like this.
Code:
let textureArray = MetalTextureArray.init(firstTexture!.width, firstTexture!.height, colorTextures.count, device)
_ = textureArray.append(colorTextures[0].texture)
_ = textureArray.append(colorTextures[1].texture)
_ = textureArray.append(colorTextures[2].texture)
_ = textureArray.append(colorTextures[3].texture)
_ = textureArray.append(colorTextures[4].texture)
renderEncoder.setFragmentTexture(textureArray.arrayTexture, at: 1)
Finally I'm accessing the texture2d_array in the fragment shader like this,
Code:
struct RasterizerData {
    float4 clipSpacePosition [[position]];
    float2 textureCoordinate;
};

fragment float4 multipleShader(RasterizerData in [[stage_in]],
                               texture2d<half> colorTexture [[ texture(0) ]],
                               texture2d_array<half> texture2D [[ texture(1) ]])
{
    constexpr sampler textureSampler (mag_filter::linear,
                                      min_filter::linear,
                                      s_address::repeat,
                                      t_address::repeat,
                                      r_address::repeat);

    // Sample the texture and return the color to colorSample
    half4 colorSample = colorTexture.sample(textureSampler, in.textureCoordinate);

    float4 outputColor;
    half red   = texture2D.sample(textureSampler, in.textureCoordinate, 2).r;
    half green = texture2D.sample(textureSampler, in.textureCoordinate, 2).g;
    half blue  = texture2D.sample(textureSampler, in.textureCoordinate, 2).b;

    outputColor = float4(colorSample.r * red, colorSample.g * green, colorSample.b * blue, colorSample.a);

    // We return the color of the texture
    return outputColor;
}
The textures I'm appending to the texture array are textures extracted from an .acv curve file, each of size 256 × 1.
In the line half red = texture2D.sample(textureSampler, in.textureCoordinate, 2).r; I passed 2 as the last argument because I thought it was the index of the texture to be accessed, but I don't know what it actually means.
After doing all this I'm getting a black screen. I have other fragment shaders and all of them work fine, but for this fragment shader I get a black screen. I think that in half blue = texture2D.sample(textureSampler, in.textureCoordinate, 2).b I'm getting 0 for all of the red, green, and blue values.
Edit 1:
As suggested, I used a blit command encoder to copy the textures, but still no result.
My code goes here.
My MetalTextureArray class has some modifications.
Method append goes like this.
func append(_ texture: MTLTexture) -> Bool {
    self.blitCommandEncoder.copy(from: texture,
                                 sourceSlice: 0,
                                 sourceLevel: 0,
                                 sourceOrigin: MTLOrigin(x: 0, y: 0, z: 0),
                                 sourceSize: MTLSize(width: texture.width, height: texture.height, depth: 1),
                                 to: self.arrayTexture,
                                 destinationSlice: count,
                                 destinationLevel: 0,
                                 destinationOrigin: MTLOrigin(x: 0, y: 0, z: 0))
    count += 1
    return true
}
And I'm appending the textures like this:

let textureArray = MetalTextureArray.init(256, 1, colorTextures.count, device, blitCommandEncoder: blitcommandEncoder)
for (index, filter) in colorTextures!.enumerated() {
    _ = textureArray.append(colorTextures[index].texture)
}
renderEncoder.setFragmentTexture(textureArray.arrayTexture, at: 1)
My shader code goes like this
fragment float4 multipleShader(RasterizerData in [[stage_in]],
                               texture2d<half> colorTexture [[ texture(0) ]],
                               texture2d_array<float> textureArray [[texture(1)]],
                               const device struct SliceDataSource &sliceData [[ buffer(2) ]])
{
    constexpr sampler textureSampler (mag_filter::linear,
                                      min_filter::linear);

    // Sample the texture and return the color to colorSample
    half4 colorSample = colorTexture.sample(textureSampler, in.textureCoordinate);

    float4 outputColor = float4(0, 0, 0, 0);

    int slice = 1;
    float red   = textureArray.sample(textureSampler, in.textureCoordinate, slice).r;
    float blue  = textureArray.sample(textureSampler, in.textureCoordinate, slice).b;
    float green = textureArray.sample(textureSampler, in.textureCoordinate, slice).g;

    outputColor = float4(colorSample.r * red, colorSample.g * green, colorSample.b * blue, colorSample.a);

    // We return the color of the texture
    return outputColor;
}
I still get a black screen.
In the call textureArray.sample(textureSampler, in.textureCoordinate, slice), what is the third parameter? I assumed it was an index and passed an arbitrary index to fetch that texture. Is that correct?
Edit 2:
I was finally able to implement the suggestion: by calling endEncoding before starting the next encoder, I got a result with the ACV negative filter applied.
Can someone advise?
Thanks.
There's a difference between an array of textures and an array texture. It sounds to me like you just want an array of textures. In that case, you should not use texture2d_array; you should use array<texture2d<half>, 5> texture_array [[texture(1)]].
In the app code, you can either use multiple calls to setFragmentTexture() to assign textures to sequential indexes or you can use setFragmentTextures() to assign a bunch of textures to a range of indexes all at once.
In the shader code, you'd use array subscripting syntax to refer to the individual textures in the array (e.g. texture_array[2]).
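For example, the app-side setup might look something like this (a sketch using the names from the question and the current Swift spelling of the API; the shader would declare a matching array<texture2d<half>, 5> parameter):

// Bind the five filter textures to fragment texture indexes 1...5 in one call.
let textures: [MTLTexture?] = colorTextures.map { $0.texture }
renderEncoder.setFragmentTextures(textures, range: 1..<(1 + textures.count))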
If you really do want to use an array texture, then you probably need to change your append() method. First, if the texture argument was not created with the makeTexture(descriptor:offset:bytesPerRow:) method of MTLBuffer, then texture.buffer will always be nil. That is, textures only have associated buffers if they were originally created from a buffer. To copy from texture to texture, you should use a blit command encoder and its copy(from:sourceSlice:sourceLevel:sourceOrigin:sourceSize:to:destinationSlice:destinationLevel:destinationOrigin:) method.
Second, if you want to replace the texture data for a specific slice (array element) of the array texture, you need to pass that slice index in as an argument to the replace() method. For that, you'd need to use the replace(region:mipmapLevel:slice:withBytes:bytesPerRow:bytesPerImage:) method, not the replace(region:mipmapLevel:withBytes:bytesPerRow:) as you're currently doing. Your current code is just replacing the first slice over and over (assuming the source textures really are associated with a buffer).
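If you stay with the replace-based route, a corrected append() might look roughly like this sketch (reusing width, height, and the count property from the question's Edit 1; it still only works when the source texture was created from an MTLBuffer):

func append(_ texture: MTLTexture) -> Bool {
    if let bytes = texture.buffer?.contents() {
        let region = MTLRegionMake2D(0, 0, width, height)
        arrayTexture.replace(region: region,
                             mipmapLevel: 0,
                             slice: count,                        // write into the next array element
                             withBytes: bytes,
                             bytesPerRow: texture.bufferBytesPerRow,
                             bytesPerImage: 0)                    // 0 for a single 2D slice
        count += 1
        return true
    }
    return false
}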
I'm implementing a colour selection tool, similar to Photoshop's magic wand tool, in iOS.
I have already got it working in RGB, but to make it more accurate I want to make it work in LAB colour space.
The way it currently works is that it takes a UIImage and creates a CGImage version of that image. It then creates a CGContext in an RGB colourspace, draws the CGImage into that context, takes the context data, and binds it to a pixel buffer that uses a struct RGBA32.
let colorSpace = CGColorSpaceCreateDeviceRGB()
let width = inputCGImage.width
let height = inputCGImage.height
let bytesPerPixel = 4
let bitsPerComponent = 8
let bytesPerRow = bytesPerPixel * width
let bitmapInfo = RGBA32.bitmapInfo

guard let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo) else {
    print("unable to create context")
    return nil
}
context.draw(inputCGImage, in: CGRect(x: 0, y: 0, width: width, height: height))

guard let buffer = context.data else {
    print("unable to get context data")
    return nil
}
let pixelBuffer = buffer.bindMemory(to: RGBA32.self, capacity: width * height)
struct RGBA32: Equatable {
    var color: UInt32

    var redComponent: UInt8 {
        return UInt8((color >> 24) & 255)
    }
    var greenComponent: UInt8 {
        return UInt8((color >> 16) & 255)
    }
    var blueComponent: UInt8 {
        return UInt8((color >> 8) & 255)
    }
    var alphaComponent: UInt8 {
        return UInt8((color >> 0) & 255)
    }

    init(red: UInt8, green: UInt8, blue: UInt8, alpha: UInt8) {
        color = (UInt32(red) << 24) | (UInt32(green) << 16) | (UInt32(blue) << 8) | (UInt32(alpha) << 0)
    }

    static let clear = RGBA32(red: 0, green: 0, blue: 0, alpha: 0)
    static let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Little.rawValue

    static func ==(lhs: RGBA32, rhs: RGBA32) -> Bool {
        return lhs.color == rhs.color
    }
}
It then uses that pixel buffer to quickly compare the colour values to the selected pixel, using a very simple Euclidean distance as detailed here:
https://en.wikipedia.org/wiki/Color_difference
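The comparison being described amounts to something like this (a sketch built on the RGBA32 struct below; the threshold is a hypothetical tolerance, not a value from the question):

// Euclidean distance in RGB between two packed pixels, compared against a tolerance.
func isClose(_ p: RGBA32, to q: RGBA32, threshold: Double) -> Bool {
    let dr = Double(p.redComponent)   - Double(q.redComponent)
    let dg = Double(p.greenComponent) - Double(q.greenComponent)
    let db = Double(p.blueComponent)  - Double(q.blueComponent)
    return (dr * dr + dg * dg + db * db).squareRoot() <= threshold
}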
As I said, it works, but for more accurate results I want it to work in CIE Lab colour space.
Initially I tried converting each pixel to LAB colour as it was checked, then used the CIE94 comparison as detailed in the colour-difference link above. It worked but was very slow, I guess because it had to convert a million pixels (or so) to LAB colour before checking.
It then struck me that, to make it work quickly, it would be better to store the pixel buffer in LAB colourspace (it's not used for anything else).
So I created a similar struct LABA32
struct LABA32: Equatable {
    var colour: UInt32

    var lComponent: UInt8 {
        return UInt8((colour >> 24) & 255)
    }
    var aComponent: UInt8 {
        return UInt8((colour >> 16) & 255)
    }
    var bComponent: UInt8 {
        return UInt8((colour >> 8) & 255)
    }
    var alphaComponent: UInt8 {
        return UInt8((colour >> 0) & 255)
    }

    init(lComponent: UInt8, aComponent: UInt8, bComponent: UInt8, alphaComponent: UInt8) {
        colour = (UInt32(lComponent) << 24) | (UInt32(aComponent) << 16) | (UInt32(bComponent) << 8) | (UInt32(alphaComponent) << 0)
    }

    static let clear = LABA32(lComponent: 0, aComponent: 0, bComponent: 0, alphaComponent: 0)
    static let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Little.rawValue

    static func ==(lhs: LABA32, rhs: LABA32) -> Bool {
        return lhs.colour == rhs.colour
    }
}
I might be wrong, but in theory, if I draw the CGImage into a context with a LAB colourspace instead of device RGB, it should map the data into this new struct.
The problem I'm having is actually creating the colourspace (let alone testing whether this theory will actually work).
To create a LAB colourspace I am trying to use this constructor:
CGColorSpace(labWhitePoint: <UnsafePointer<CGFloat>!>, blackPoint: <UnsafePointer<CGFloat>!>, range: <UnsafePointer<CGFloat>!>)
According to Apple's documentation:

whitePoint: An array of 3 numbers that specify the tristimulus value, in the CIE 1931 XYZ-space, of the diffuse white point.

blackPoint: An array of 3 numbers that specify the tristimulus value, in CIE 1931 XYZ-space, of the diffuse black point.

range: An array of 4 numbers that specify the range of valid values for the a* and b* components of the color space. The a* component represents values running from green to red, and the b* component represents values running from blue to yellow.
So I've created 3 arrays of CGFloats
var whitePoint:[CGFloat] = [0.95947,1,1.08883]
var blackPoint:[CGFloat] = [0,0,0]
var range:[CGFloat] = [-127,127,-127,127]
I then try to construct the colour space
let colorSpace = CGColorSpace(labWhitePoint: &whitePoint, blackPoint: &blackPoint, range: &range)
The problem is that I keep getting the error "unsupported color space", so I must be doing something completely wrong. I've spent a lot of time looking for others trying to construct a LAB colour space, but there doesn't seem to be anything relevant, even in Objective-C.
So how do I actually create a LAB colourspace correctly?
Thanks.
The documentation also says:
Important: iOS does not support device-independent or generic color spaces. iOS applications must use device color spaces instead.
So if you want to work in LAB I guess you have to do the transformation manually.
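For example, a hand-rolled sRGB to CIE L*a*b* conversion using the standard D65 formulas might look like this (a sketch, not tied to the asker's code; inputs are 0...1 sRGB components):

import Foundation

func srgbToLab(r: Double, g: Double, b: Double) -> (L: Double, a: Double, b: Double) {
    // 1. Undo the sRGB gamma curve.
    func linearize(_ c: Double) -> Double {
        return c <= 0.04045 ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4)
    }
    let (lr, lg, lb) = (linearize(r), linearize(g), linearize(b))

    // 2. Linear RGB -> XYZ (sRGB matrix, D65 white point).
    let x = 0.4124 * lr + 0.3576 * lg + 0.1805 * lb
    let y = 0.2126 * lr + 0.7152 * lg + 0.0722 * lb
    let z = 0.0193 * lr + 0.1192 * lg + 0.9505 * lb

    // 3. XYZ -> Lab relative to the D65 reference white.
    func f(_ t: Double) -> Double {
        let delta = 6.0 / 29.0
        return t > delta * delta * delta ? cbrt(t) : t / (3 * delta * delta) + 4.0 / 29.0
    }
    let (fx, fy, fz) = (f(x / 0.95047), f(y / 1.0), f(z / 1.08883))
    return (116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz))
}

You could run this once per pixel to fill a LAB-valued buffer up front, then do all the CIE94 comparisons against that buffer.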
I have a texture image that I am using with GLKit. If I use GL_MODULATE on the texture and have a vertex RGBA of (1.0, 1.0, 1.0, 1.0), then the texture shows up exactly as it would with GL_REPLACE: fully opaque.
Then if I use red (1.0, 0.0, 0.0, 1.0) for the vertex color, the texture shows up again, now with red modulating the texture.
So far so good.
But when I change the transparency in the vertex color and use RGBA (1.0, 0.0, 0.0, 0.5), only a light red color is seen and the texture is not visible; the color replaces the texture entirely.
The texture itself has no alpha; it is an RGB565 texture.
I am using GLKit with GLKTextureEnvModeModulate.
self.effect.texture2d0.envMode = GLKTextureEnvModeModulate;
Any help on why the texture would disappear when I specify the alpha?
Adding snapshots:
This is the original texture
RGBA (1.0, 1.0, 1.0, 1.0) - white color, no premultiplication, opaque, texture visible
RGBA (1.0, 1.0, 1.0, 0.5) - white color, no premultiplication, alpha = 0.5, texture lost
RGBA (1.0, 0, 0, 1.0) - red color, no premultiplication, opaque, texture visible
RGBA (1.0, 0, 0, 0.5) - red color, no premultiplication, alpha = 0.5, texture lost
RGBA (0.5, 0, 0, 0.5) - red color, premultiplication, alpha = 0.5 per @andon, texture visible, but you may need to magnify to see it
RGBA (0.1, 0, 0, 0.1) - red color, premultiplication, alpha = 0.1 per @andon, texture lost, probably because not enough contrast is there
RGBA (0.9, 0, 0, 0.9) - red color, premultiplication, alpha = 0.9 per @andon, texture visible, but you may need to magnify to see it
The texture itself has no alpha, it is RGB565 texture
RGB565 implicitly has constant alpha (opaque -> 1.0). That may not sound important, but modulating vertex color with texture color does a component-wise multiplication and that would not work at all if alpha were not 1.0.
My blend function is for pre-multiplied - One, One - Src.
This necessitates pre-multiplying the RGB components of the vertex color by the A component. All colors must be pre-multiplied; this includes texels and vertex colors.
You can see why below:
Vtx = (1.0, 0.0, 0.0, 0.5)
Tex = (R, G, B, 1.0)
// Modulate Vertex and Tex
Src = Vtx * Tex = (R, 0, 0, 0.5)
// Pre-multiplied Alpha Blending (done incorrectly)
Blend_RGB = Src * 1 + (1 - Src.a) * Dst
= Src + Dst / 2.0
= (R, 0, 0) + Dst / 2.0
The only thing this does is divide the destination color by 2 and add the unaltered source color to it. It is supposed to resemble linear interpolation (a * c + (1 - c) * b).
Proper blending should look like this:
// Traditional Blending
Blend_RGB = Src * Src.a + (1 - Src.a) * Dst
= (0.5R, 0, 0) + Dst / 2.0
This can be accomplished using the original blend function if you multiply the RGB part of the vertex color by A.
Correct pre-multiplied alpha blending (by pre-multiplying vertex color):
Vtx = (0.5, 0.0, 0.0, 0.5) // Pre-multiply: RGB *= A
Tex = (R, G, B, 1.0)
// Modulate Vertex and Tex
Src = Vtx * Tex = (0.5R, 0, 0, 0.5)
// Pre-multiplied Alpha Blending (done correctly)
Blend_RGB = Src * 1 + (1 - Src.a) * Dst
= (0.5R, 0, 0) + Dst / 2.0
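The pre-multiplication step itself is trivial; as a sketch (a plain helper, not a GLKit API), it is just:

// Convert a straight-alpha colour to pre-multiplied alpha before handing it to the effect.
func premultiplied(r: Float, g: Float, b: Float, a: Float) -> (Float, Float, Float, Float) {
    return (r * a, g * a, b * a, a)
}
// premultiplied(r: 1.0, g: 0, b: 0, a: 0.5) yields (0.5, 0.0, 0.0, 0.5), as in the worked example above.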