Why does sending RGBA data through a BGRA pipelineDescriptor work? - iOS

My MTKView uses BGRA and I set up my pipelineDescriptor in BGRA as below:
pipelineDescriptor.colorAttachments.objectAtIndexedSubscript(0).setPixelFormat(MTLPixelFormatBGRA8Unorm);
Now the strange part: if I send data encoded in RGBA (I mean a 16-byte color encoded as R (4 bytes) + G (4 bytes) + B (4 bytes) + A (4 bytes)) to the shader below, it works fine!
RenderCommandEncoder.setVertexBuffer(LvertexBuffer{buffer}, 0{offset}, 0{atIndex})
and
#include <metal_stdlib>
using namespace metal;
#include <simd/simd.h>
struct VertexIn {
    vector_float4 color;
    vector_float2 pos;
};
struct VertexOut {
    float4 color;
    float4 pos [[position]];
};
vertex VertexOut vertexShader(
    const device VertexIn *vertexArray [[buffer(0)]],
    unsigned int vid [[vertex_id]],
    constant vector_uint2 *viewportSizePointer [[buffer(1)]]
)
{
    // Get the data for the current vertex.
    VertexIn in = vertexArray[vid];
    VertexOut out;
    ...
    out.color = in.color;
    ....
    return out;
}
fragment float4 fragmentShader(
    VertexOut interpolated [[stage_in]],
    texture2d<half> colorTexture [[ texture(0) ]]
)
{
    return interpolated.color;
}
How is this possible? Is it about little vs. big endian?

The color(s) you return from a fragment function are assumed to be in RGBA order, regardless of the pixel format of your render target. They get swizzled and/or converted as necessary to match the format of the destination. The same thing happens when sampling/writing other textures: colors always arrive in RGBA order.
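As a minimal illustration (a hypothetical shader, not from the original answer): a fragment function that returns float4(1, 0, 0, 1) comes out red whether the attachment is MTLPixelFormatBGRA8Unorm or MTLPixelFormatRGBA8Unorm, because the returned components are always interpreted as (r, g, b, a) and are reordered as needed when stored to the attachment.

#include <metal_stdlib>
using namespace metal;

// Sketch: the return value is treated as (r, g, b, a) no matter which pixel
// format the color attachment uses; the swizzle happens on the store.
fragment float4 solidRedFragment()
{
    return float4(1.0, 0.0, 0.0, 1.0); // red on BGRA8Unorm and RGBA8Unorm alike
}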

Related

Something like vertex_id for Metal fragment shader, to identify fragment?

Vertex shaders in Metal can take an integer argument with the [[vertex_id]] attribute, and that argument receives values between 0 and the number of vertices. Is there a similar thing for fragment shaders?
I want to write some debug info to a buffer from within the fragment shader, as shown below. Then, in the CPU code, I would print the contents of the buffer as a way to debug what is going on in the fragment shader.
struct VertexIn {
    float4 position [[attribute(0)]];
};
struct VertexOut {
    float4 position [[position]];
};
vertex VertexOut vertex_main(const VertexIn vertex_in [[stage_in]]) {
    VertexOut out;
    out.position = vertex_in.position;
    return out;
}
fragment float4
fragment_main(VertexOut vo [[stage_in]],
              uint fragment_id [[fragment_id]], // <-- Hypothetical, doesn't actually work.
              device float4 *debug_out [[buffer(0)]],
              device uint *debug_out2 [[buffer(1)]])
{
    debug_out[fragment_id] = vo.position;
    debug_out2[fragment_id] = ...
    return float4(0, 1, clamp(vo.position.x/1000, 0.0, 1.0), 1);
}
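Metal has no [[fragment_id]] attribute, but one way to get a per-fragment slot for this kind of debugging is to index the buffer by the pixel coordinate taken from the [[position]] built-in. A rough sketch (assuming the CPU side binds a buffer of width × height float4 entries at buffer(0) and passes the framebuffer width at buffer(1); fragments that land on the same pixel overwrite each other):

#include <metal_stdlib>
using namespace metal;

struct VertexOut {
    float4 position [[position]];
};

// Sketch: derive a debug-buffer index from the pixel coordinate instead of a
// (nonexistent) fragment id.
fragment float4 fragment_debug(VertexOut vo [[stage_in]],
                               device float4 *debug_out [[buffer(0)]],
                               constant uint &framebufferWidth [[buffer(1)]])
{
    uint2 pixel = uint2(vo.position.xy);               // integer pixel coordinate
    uint index = pixel.y * framebufferWidth + pixel.x; // row-major index into the buffer
    debug_out[index] = vo.position;
    return float4(0, 1, clamp(vo.position.x/1000, 0.0, 1.0), 1);
}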

Metal shader: pass color encoded as 4-byte INTEGER instead of 8-byte FLOAT

I need to send color encoded as a 4-byte RGBA integer (not float) to my Metal shader, but I don't know whether a Metal shader can handle a color stored as an integer. Currently I convert it in the shader to a float4 color (I don't even know whether half4 would be better), but I'm not sure this is a good way:
struct VertexIn {
    packed_float3 pos;
    packed_uchar4 color;
};
struct VertexOut {
    float4 pos [[position]];
    float4 color;
};
vertex VertexOut vertexShader(const device VertexIn *vertexArray [[buffer(0)]],
                              const unsigned int vid [[vertex_id]]) {
    VertexIn in = vertexArray[vid];
    VertexOut out;
    out.color = float4(float(in.color[2])/255, float(in.color[1])/255, float(in.color[0])/255, float(in.color[3])/255);
    out.pos = float4(in.pos.x, in.pos.y, in.pos.z, 1);
    return out;
}
fragment float4 fragmentShader(VertexOut interpolated [[stage_in]]) {
    return interpolated.color;
}
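The manual divide-by-255 works; an alternative worth noting (a sketch, not part of the original question) is to let the vertex-fetch stage normalize the bytes: describe the color attribute as MTLVertexFormatUChar4Normalized in an MTLVertexDescriptor (or MTLVertexFormatUChar4Normalized_BGRA, if available, when the bytes are stored blue-first as the swizzle above suggests) and declare the input with [[stage_in]], so it arrives as a float4 already in the 0–1 range:

#include <metal_stdlib>
using namespace metal;

// Sketch: with a vertex descriptor declaring attribute(1) as
// MTLVertexFormatUChar4Normalized, the four bytes arrive already divided by 255,
// so no conversion is needed in the shader.
struct VertexIn {
    float3 pos   [[attribute(0)]];
    float4 color [[attribute(1)]];
};

struct VertexOut {
    float4 pos [[position]];
    float4 color;
};

vertex VertexOut vertexShader(VertexIn in [[stage_in]])
{
    VertexOut out;
    out.pos = float4(in.pos, 1.0);
    out.color = in.color;
    return out;
}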

iOS Metal: casting half4 variable to float4 type

I am using a sampler to sample from a texture:
constexpr sampler cubeSampler(filter::linear, mip_filter::none);
half4 res = cubeTexture.sample(cubeSampler, texVec);
The result is of type half4, but I need to cast it to float4 in order to perform math operations. How can I perform this cast?
static_cast works, or you can use the more terse converting constructor:
float4 res_float4 = float4(res);
constexpr sampler cubeSampler(filter::linear, mip_filter::none);
half4 res = cubeTexture.sample(cubeSampler, texVec);
// cast to float4:
float4 res_float4 = static_cast<float4>(res);

Writing to a UAV in a pixel shader

I was experimenting with a texture as a UAV in a pixel shader by writing some values to it, but I'm not seeing the effect in the next draw call when I bind the same texture again as an SRV.
Example shader:
RWTexture2D<unsigned int> uav;
Texture2D tex : register(t0);
// Vertex Shader
float4 VS( float4 Pos : POSITION ) : SV_POSITION
{
    return Pos;
}
// Pixel Shader, draw1 warm up
float4 PS( float4 Pos : SV_POSITION ) : SV_Target
{
    return float4( 1.0f, 1.0f, 0.0f, 1.0f ); // Yellow, with Alpha = 1
}
// Pixel Shader, we are writing onto the texture by binding it as a UAV, draw2
float4 PS1( float4 Pos : SV_POSITION ) : SV_Target
{
    if ((Pos.x % 2) && (Pos.y % 2))
    {
        uav[Pos.xy] = 0xFF000000; // some color
    }
    else
    {
        uav[Pos.xy] = 0x00FF0000; // some color
    }
    return float4( 1.0f, 0.0f, 0.0f, 1.0f );
}
// Pixel Shader, here we are accessing the texture as an SRV, draw3
float4 PS2( float4 Pos : SV_POSITION ) : SV_Target
{
    float4 x = tex[Pos.xy];
    return x;
}
I can provide the app source code if required.
I enabled the debug layer. It was a UAV format-mismatch error: in the UAV description I declared R8G8B8A8_UNORM as the format, while the shader accesses the element as UINT.
description: D3D11 ERROR: ID3D11DeviceContext::Draw: The resource return type for component 0 declared in the shader code (UINT) is not compatible with the resource type bound to Unordered Access View slot 1 of the Pixel Shader unit (UNORM). This mismatch is invalid if the shader actually uses the view [ EXECUTION ERROR #2097372: DEVICE_UNORDEREDACCESSVIEW_RETURN_TYPE_MISMATCH]
Source code:
D3D11_UNORDERED_ACCESS_VIEW_DESC UAVdesc;
ZeroMemory( &UAVdesc, sizeof(UAVdesc));
UAVdesc.Format=DXGI_FORMAT_R8G8B8A8_UNORM;
UAVdesc.ViewDimension=D3D11_UAV_DIMENSION_TEXTURE2D;
UAVdesc.Texture2D.MipSlice=0;
g_pd3dDevice->CreateUnorderedAccessView( g_pTexture, &UAVdesc, &g_pUAV);
Texture creation:
D3D11_TEXTURE2D_DESC TextureData;
ZeroMemory( &TextureData, sizeof(TextureData) );
TextureData.ArraySize=1;
TextureData.Height=height;
TextureData.Width=width;
TextureData.Format=DXGI_FORMAT_R8G8B8A8_TYPELESS;
TextureData.CPUAccessFlags=0;
TextureData.BindFlags=D3D11_BIND_SHADER_RESOURCE|D3D11_BIND_RENDER_TARGET|D3D11_BIND_UNORDERED_ACCESS;
TextureData.MipLevels=1;
TextureData.MiscFlags=0;
TextureData.SampleDesc.Count=1;
TextureData.SampleDesc.Quality=0;
TextureData.Usage=D3D11_USAGE_DEFAULT;
D3D11_SUBRESOURCE_DATA InitialData;
ZeroMemory( &InitialData, sizeof(InitialData));
InitialData.pSysMem=pData;
InitialData.SysMemPitch=width * sizeof(UINT);
InitialData.SysMemSlicePitch=width * sizeof(UINT) * height;
g_pd3dDevice->CreateTexture2D( &TextureData, &InitialData, &g_pTexture);
Shader code is already given above.
Fix:
D3D11_UNORDERED_ACCESS_VIEW_DESC UAVdesc;
ZeroMemory( &UAVdesc, sizeof(UAVdesc));
UAVdesc.Format=DXGI_FORMAT_R32_UINT; // <-- changed format
UAVdesc.ViewDimension=D3D11_UAV_DIMENSION_TEXTURE2D;
UAVdesc.Texture2D.MipSlice=0;
g_pd3dDevice->CreateUnorderedAccessView( g_pTexture, &UAVdesc, &g_pUAV);
Confirmed by dumping the texture via a staging resource. Thanks guys.
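For reference, a sketch (assumed, not shown in the original post; g_pSRV is a placeholder variable) of how the SRV used by PS2 can view the same DXGI_FORMAT_R8G8B8A8_TYPELESS texture as UNORM, while the UAV above views it as R32_UINT for the integer writes:

// Sketch: SRV over the typeless texture for the read in draw3 (PS2).
D3D11_SHADER_RESOURCE_VIEW_DESC SRVdesc;
ZeroMemory( &SRVdesc, sizeof(SRVdesc));
SRVdesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
SRVdesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
SRVdesc.Texture2D.MostDetailedMip = 0;
SRVdesc.Texture2D.MipLevels = 1;
g_pd3dDevice->CreateShaderResourceView( g_pTexture, &SRVdesc, &g_pSRV );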

Pass an array to HLSL vertex shader as an argument?

I need to pass an array to my vertex shader as an argument in Direct3D. The signature of the shader function looks like the following:
ReturnDataType main(float3 QuadPos : POSITION0, float4 OffsetArray[4] : COLOR0)
Is the signature OK? How do I define the input layout description?
Thanks in advance.
According to the HLSL reference pages, function arguments only support intrinsic types and user-defined types (which are themselves built from intrinsic types); a native array type is not supported as an argument, so a vector type or a struct type might be your choice.
Here is an example using a struct: you can simply build up a struct and pass it in, like VS_INPUT below.
//--------------------------------------------------------------------------------------
// Input / Output structures
//--------------------------------------------------------------------------------------
struct VS_INPUT
{
    float4 vPosition : POSITION;
    float3 vNormal : NORMAL;
    float2 vTexcoord : TEXCOORD0;
};
struct VS_OUTPUT
{
    float3 vNormal : NORMAL;
    float2 vTexcoord : TEXCOORD0;
    float4 vPosition : SV_POSITION;
};
// Constant buffer providing the transform matrices referenced below
// (assumed; not shown in the original snippet).
cbuffer cbPerObject
{
    matrix g_mWorldViewProjection;
    matrix g_mWorld;
};
//--------------------------------------------------------------------------------------
// Vertex Shader
//--------------------------------------------------------------------------------------
VS_OUTPUT VSMain( VS_INPUT Input )
{
    VS_OUTPUT Output;
    Output.vPosition = mul( Input.vPosition, g_mWorldViewProjection );
    Output.vNormal = mul( Input.vNormal, (float3x3)g_mWorld );
    Output.vTexcoord = Input.vTexcoord;
    return Output;
}
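To address the input-layout part of the question, here is a sketch (assuming tightly packed per-vertex data in a single buffer; pVSBlob and g_pVertexLayout are placeholder names) of a D3D11 input layout matching VS_INPUT above:

// Sketch: one element per VS_INPUT field, with offsets assuming tight packing
// (float4 = 16 bytes, float3 = 12 bytes, float2 = 8 bytes).
D3D11_INPUT_ELEMENT_DESC layout[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0,  0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT,    0, 16, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,       0, 28, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
g_pd3dDevice->CreateInputLayout( layout, ARRAYSIZE(layout),
                                 pVSBlob->GetBufferPointer(), pVSBlob->GetBufferSize(),
                                 &g_pVertexLayout );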
