iOS render-to-texture with Metal does not update render texture

I would like to use a fragment shader to render to a texture offscreen. Once this is done I want to use the result of that as the input for another fragment shader.
I create a texture and clear it to red (so I know it is set). I encode a render pass whose color attachment is my target texture and draw. I then use a blit command encoder to transfer the contents of that target texture into a buffer. The buffer contains red, so I know the copy is reading the texture correctly, but the draw should have made the texture green, so something is wrong.
let textureDescriptor = MTLTextureDescriptor()
textureDescriptor.textureType = MTLTextureType.type2D
textureDescriptor.width = 2048
textureDescriptor.height = 1024
textureDescriptor.pixelFormat = .rgba8Unorm
textureDescriptor.storageMode = .shared
textureDescriptor.usage = [.renderTarget, .shaderRead, .shaderWrite]
bakeTexture = device.makeTexture(descriptor: textureDescriptor)
bakeRenderPass = MTLRenderPassDescriptor()
bakeRenderPass.colorAttachments[0].texture = bakeTexture
bakeRenderPass.colorAttachments[0].loadAction = .clear
bakeRenderPass.colorAttachments[0].clearColor = MTLClearColor(red:1.0,green:0.0,blue:0.0,alpha:1.0)
bakeRenderPass.colorAttachments[0].storeAction = .store
For drawing I do this:
let bakeCommandEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: bakeRenderPass)!
let vp = MTLViewport(originX:0, originY:0, width:2048, height:1024,znear:0.0,zfar:1.0)
bakeCommandEncoder.setViewport(vp)
bakeCommandEncoder.setCullMode(.none) // disable culling
// draw here. Fragment shader sets final color to float4(0.0,1.0,0.0,1.0);
bakeCommandEncoder.endEncoding()
let blitEncoder = commandBuffer.makeBlitCommandEncoder()!
blitEncoder.copy(...) // this works as my buffer is all red
blitEncoder.endEncoding()
Here is the vertex shader - it is based on an OpenGL vertex shader to dump out the uv-layout of the texture:
struct VertexOutBake {
    float4 position [[position]];
    float3 normal;
    float3 tangent;
    float3 binormal;
    float3 worldPosition;
    float2 texCoords;
};

vertex VertexOutBake vertex_main_bake(VertexInBake vertexIn [[stage_in]],
                                      constant VertexUniforms &uniforms [[buffer(1)]])
{
    VertexOutBake vertexOut;
    float4 worldPosition = uniforms.modelMatrix * float4(vertexIn.position, 1);
    vertexOut.worldPosition = worldPosition.xyz;
    vertexOut.normal = normalize(vertexIn.normal);
    vertexOut.tangent = normalize(vertexIn.tangent);
    vertexOut.binormal = normalize(vertexIn.binormal);
    vertexOut.texCoords = vertexIn.texCoords;
    vertexOut.texCoords.y = 1.0 - vertexOut.texCoords.y; // flip image
    // now use uv coordinates instead of 3D positions
    vertexOut.position.x = vertexOut.texCoords.x * 2.0 - 1.0;
    vertexOut.position.y = 1.0 - vertexOut.texCoords.y * 2.0;
    vertexOut.position.z = 1.0;
    vertexOut.position.w = 1.0;
    return vertexOut;
}
The buffer filled by the blit copy should be green, but it is red. This seems to mean that either bakeTexture is never written by the fragment shader, or it is written but some synchronization is missing, so the content is not yet available at the time I do the copy.
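For reference, this is roughly the readback path I would expect to need here. It is only a sketch, assuming a single command buffer; readbackBuffer is a hypothetical name for the blit destination and is not in the code above.

// Sketch only: the command buffer has to be committed and completed
// before the blit destination is read on the CPU.
commandBuffer.commit()
commandBuffer.waitUntilCompleted()

// readbackBuffer is a hypothetical MTLBuffer used as the copy target above.
let pixels = readbackBuffer.contents().bindMemory(to: UInt8.self,
                                                  capacity: 2048 * 1024 * 4)
// pixels[0...3] would be the first RGBA texel of bakeTexture after the bake pass.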

Related

Directx 9 Normal Mapping Pixelshader

I have a question about normal mapping in a DirectX 9 shader.
Currently my terrain shader output for normal map + diffuse color only results in this image.
Which looks good to me.
If I use an empty normal map image like this one,
my shader output for normal, diffuse and color map looks like this.
But if I use one including a ColorMap I get a really strange result.
Does anyone have an idea what could cause this issue?
Here are some snippets.
float4 PS_TERRAIN(VSTERRAIN_OUTPUT In) : COLOR0
{
    float4 fDiffuseColor;
    float lightIntensity;

    float3 bumpMap = 2.0f * tex2D( Samp_Bump, In.Tex.xy ).xyz - 1.0f;
    float3 bumpNormal = (bumpMap.x * In.Tangent) + (bumpMap.y * In.Bitangent) + (bumpMap.z * In.Normal);
    bumpNormal = normalize(bumpNormal);

    // Direction Light Test ( Test hardcoded )
    float3 lightDirection = float3(0.0f, -0.5f, -0.2f);
    float3 lightDir = -lightDirection;

    // Bump
    lightIntensity = saturate(dot( bumpNormal, lightDir));

    // We are using a lightmap to do our alpha calculation for given pixel
    float4 LightMaptest = tex2D( Samp_Lightmap, In.Tex.zw ) * 2.0f;
    fDiffuseColor.a = LightMaptest.a;
    if( !bAlpha )
        fDiffuseColor.a = 1.0;

    // Sample the pixel color from the texture using the sampler at this texture coordinate location.
    float4 textureColor = tex2D( Samp_Diffuse, In.Tex.xy );

    // Combine the color map value into the texture color.
    textureColor = saturate(textureColor * LightMaptest);
    textureColor.a = LightMaptest.a;

    fDiffuseColor.rgb = saturate(lightIntensity * I_d).rgb;
    fDiffuseColor = fDiffuseColor * textureColor; // If I enable this line it goes crazy
    return fDiffuseColor;
}

Alpha blending with two transparent textures not working correctly

I have a destination texture:
Here the whole texture is transparent (alpha = 0) except for the red part, which has an alpha value of 0.5. I used a rectangle plane to present this texture.
Then I have this source texture. It is also a transparent texture, with a black part that has an alpha value of 0.5. I used another rectangle plane to present this texture, and I changed the MTLRenderPipelineDescriptor blending to
pipelineDescriptor.colorAttachments[0].isBlendingEnabled = true
pipelineDescriptor.colorAttachments[0].rgbBlendOperation = .add
pipelineDescriptor.colorAttachments[0].alphaBlendOperation = .add
pipelineDescriptor.colorAttachments[0].sourceRGBBlendFactor = .one
pipelineDescriptor.colorAttachments[0].sourceAlphaBlendFactor = .one
pipelineDescriptor.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha
pipelineDescriptor.colorAttachments[0].destinationAlphaBlendFactor = .oneMinusSourceAlpha
Here blending works fine between two textures.
Then I try to merge these two textures into one destination texture using MTLComputeCommandEncoder. My kernel function:
kernel void compute(texture2d<float, access::read_write> des [[texture(0)]],
                    texture2d<float, access::read> src [[texture(1)]],
                    uint2 gid [[thread_position_in_grid]])
{
    float4 srcColor = src.read(gid);
    float4 desColor = des.read(gid);
    float srcAlpha = srcColor.a;
    float4 outColor = srcColor + desColor * (1 - srcAlpha);
    des.write(outColor, gid);
}
But after that the blended color is different: it is lighter than the result of the render-pipeline blending.
How do I properly blend two transparent textures in a kernel function? What is wrong with my solution?
I think that you are using premultiplied alpha...
Try this instead (which is not premultiplied alpha):
float4 srcColor = src.read(gid);
float4 desColor = des.read(gid);
float4 outColor;
outColor.a = srcColor.a + desColor.a * (1.0f - srcColor.a);
if (outColor.a == 0.0f) {
    outColor.r = 0.0f;
    outColor.g = 0.0f;
    outColor.b = 0.0f;
} else {
    outColor.r = (srcColor.r * srcColor.a + desColor.r * desColor.a * (1.0f - srcColor.a)) / outColor.a;
    outColor.g = (srcColor.g * srcColor.a + desColor.g * desColor.a * (1.0f - srcColor.a)) / outColor.a;
    outColor.b = (srcColor.b * srcColor.a + desColor.b * desColor.a * (1.0f - srcColor.a)) / outColor.a;
}
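For completeness, a minimal host-side dispatch for such a kernel might look like the sketch below. The names computePipelineState, desTexture, srcTexture and commandBuffer are hypothetical and not from the question.

// Sketch only: encode and dispatch the merge kernel over the destination texture.
let computeEncoder = commandBuffer.makeComputeCommandEncoder()!
computeEncoder.setComputePipelineState(computePipelineState)
computeEncoder.setTexture(desTexture, index: 0) // read_write destination
computeEncoder.setTexture(srcTexture, index: 1) // read-only source
let threadsPerGroup = MTLSize(width: 8, height: 8, depth: 1)
let groupCount = MTLSize(width: (desTexture.width + 7) / 8,
                         height: (desTexture.height + 7) / 8,
                         depth: 1)
computeEncoder.dispatchThreadgroups(groupCount, threadsPerThreadgroup: threadsPerGroup)
computeEncoder.endEncoding()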

Why sPos.z is used to get the texcoord in shadow mapping

Why is sPos.z used here to get the texcoord?
Out.shadowCrd.x = 0.5 * (sPos.z + sPos.x);
Out.shadowCrd.y = 0.5 * (sPos.z - sPos.y);
Out.shadowCrd.z = 0;
Out.shadowCrd.w = sPos.z;
It is a shader which implements shadow mapping from "Shaders for Game Programming and Artists".
The first pass renders a depth texture in light space (the light acts as the camera, looking towards the origin).
The second pass reads that depth and calculates the shadow.
Before this code, the model has already been transformed into light space.
Then the texcoord has to be calculated to read the depth texture.
But I can't understand the algorithm for calculating the texcoord. Why does sPos.z appear here?
Here is the whole vertex shader of the second pass
float distanceScale;
float4 lightPos;
float4 view_position;
float4x4 view_proj_matrix;
float4x4 proj_matrix;
float time_0_X;
struct VS_OUTPUT
{
    float4 Pos: POSITION;
    float3 normal: TEXCOORD0;
    float3 lightVec : TEXCOORD1;
    float3 viewVec: TEXCOORD2;
    float4 shadowCrd: TEXCOORD3;
};
VS_OUTPUT vs_main(float4 inPos: POSITION, float3 inNormal: NORMAL)
{
    VS_OUTPUT Out;

    // Animate the light position.
    float3 lightPos;
    lightPos.x = cos(1.321 * time_0_X);
    lightPos.z = sin(0.923 * time_0_X);
    lightPos.xz = 100 * normalize(lightPos.xz);
    lightPos.y = 100;

    // Project the object's position
    Out.Pos = mul(view_proj_matrix, inPos);

    // World-space lighting
    Out.normal = inNormal;
    Out.lightVec = distanceScale * (lightPos - inPos.xyz);
    Out.viewVec = view_position - inPos.xyz;

    // Create view vectors for the light, looking at (0,0,0)
    float3 dirZ = -normalize(lightPos);
    float3 up = float3(0,0,1);
    float3 dirX = cross(up, dirZ);
    float3 dirY = cross(dirZ, dirX);

    // Transform into light's view space.
    float4 pos;
    inPos.xyz -= lightPos;
    pos.x = dot(dirX, inPos);
    pos.y = dot(dirY, inPos);
    pos.z = dot(dirZ, inPos);
    pos.w = 1;

    // Project it into light space to determine the shadow
    // map position
    float4 sPos = mul(proj_matrix, pos);

    // Use projective texturing to map the position of each fragment
    // to its corresponding texel in the shadow map.
    sPos.z += 10;
    Out.shadowCrd.x = 0.5 * (sPos.z + sPos.x);
    Out.shadowCrd.y = 0.5 * (sPos.z - sPos.y);
    Out.shadowCrd.z = 0;
    Out.shadowCrd.w = sPos.z;

    return Out;
}
Pixel Shader:
float shadowBias;
float backProjectionCut;
float Ka;
float Kd;
float Ks;
float4 modelColor;
sampler ShadowMap;
sampler SpotLight;
float4 ps_main(
    float3 inNormal: TEXCOORD0,
    float3 lightVec: TEXCOORD1,
    float3 viewVec: TEXCOORD2,
    float4 shadowCrd: TEXCOORD3) : COLOR
{
    // Normalize the normal
    inNormal = normalize(inNormal);

    // Radial distance and normalize light vector
    float depth = length(lightVec);
    lightVec /= depth;

    // Standard lighting
    float diffuse = saturate(dot(lightVec, inNormal));
    float specular = pow(saturate(
        dot(reflect(-normalize(viewVec), inNormal), lightVec)),
        16);

    // The depth of the fragment closest to the light
    float shadowMap = tex2Dproj(ShadowMap, shadowCrd);

    // A spot image of the spotlight
    float spotLight = tex2Dproj(SpotLight, shadowCrd);

    // If the depth is larger than the stored depth, this fragment
    // is not the closest to the light, that is we are in shadow.
    // Otherwise, we're lit. Add a bias to avoid precision issues.
    float shadow = (depth < shadowMap + shadowBias);

    // Cut back-projection, that is, make sure we don't light
    // anything behind the light.
    shadow *= (shadowCrd.w > backProjectionCut);

    // Modulate with spotlight image
    shadow *= spotLight;

    // Shadow any light contribution except ambient
    return Ka * modelColor +
        (Kd * diffuse * modelColor + Ks * specular) * shadow;
}
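One way to read those shadowCrd assignments (my interpretation of the code above, not text from the book): tex2Dproj divides the coordinate by its w component, which the vertex shader has set to sPos.z, so the coordinates actually sampled are

$$u = \frac{0.5\,(sPos.z + sPos.x)}{sPos.z} = 0.5 + 0.5\,\frac{sPos.x}{sPos.z}, \qquad v = \frac{0.5\,(sPos.z - sPos.y)}{sPos.z} = 0.5 - 0.5\,\frac{sPos.y}{sPos.z}$$

That is, the divide by the light-space depth is the perspective divide that maps the projected x and y into roughly the [0, 1] texture range (with y flipped to match texture space), and storing sPos.z in shadowCrd.w also makes that depth available for the back-projection test (shadowCrd.w > backProjectionCut) in the pixel shader.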

Volumetric Fog Shader - Camera Issue

I am trying to build an infinite fog shader. This fog is applied on a 3D plane.
For the moment I have a Z-depth fog, and I encounter some issues.
As you can see in the screenshot, there are two views.
The green color is my 3D plane. The problem is the red line. It seems that this line depends on my camera, which is not good, because when I rotate the camera the line is affected by the camera's position and rotation.
I don't know where it comes from or how to make my fog limit independent of the camera position.
Shader
Pass {
    CGPROGRAM
    #pragma vertex vert
    #pragma fragment frag
    #include "UnityCG.cginc"

    uniform float4 _FogColor;
    uniform sampler2D _CameraDepthTexture;
    float _Depth;
    float _DepthScale;

    struct v2f {
        float4 pos : SV_POSITION;
        float4 projection : TEXCOORD0;
        float4 screenPosition : TEXCOORD1;
    };

    v2f vert(appdata_base v) {
        v2f o;
        o.pos = mul(UNITY_MATRIX_MVP, v.vertex);

        // o.projection = ComputeGrabScreenPos(o.pos);
        float4 position = o.pos;
        #if UNITY_UV_STARTS_AT_TOP
        float scale = -1.0;
        #else
        float scale = 1.0;
        #endif
        float4 p = position * 0.5f;
        p.xy = float2(p.x, p.y * scale) + p.w;
        p.zw = position.zw;
        o.projection = p;

        // o.screenPosition = ComputeScreenPos(o.pos);
        position = o.pos;
        float4 q = position * 0.5f;
        #if defined(UNITY_HALF_TEXEL_OFFSET)
        q.xy = float2(q.x, q.y * _ProjectionParams.x) + q.w * _ScreenParams.zw;
        #else
        q.xy = float2(q.x, q.y * _ProjectionParams.x) + q.w;
        #endif
        #if defined(SHADER_API_FLASH)
        q.xy *= unity_NPOTScale.xy;
        #endif
        q.zw = position.zw;
        q.zw = 1.0f;
        o.screenPosition = q;

        return o;
    }

    sampler2D _GrabTexture;

    float4 frag(v2f IN) : COLOR {
        float3 uv = UNITY_PROJ_COORD(IN.projection);
        float depth = UNITY_SAMPLE_DEPTH(tex2Dproj(_CameraDepthTexture, uv));
        depth = LinearEyeDepth(depth);
        return saturate((depth - IN.screenPosition.w + _Depth) * _DepthScale);
    }
    ENDCG
}
Next I want to rotate my fog to have a Y-depth fog, but I don't know how to achieve this effect.
I see two ways to achieve what you want:
One is to render the depth of your plane to a texture and calculate the fog based on the difference between the plane depth and the object depth: 0 if the object depth is less, and (objDepth - planeDepth) * scale if it is bigger.
The other is, instead of rendering to a texture, to calculate the distance to the plane in the shader and use it directly (a distance formula is sketched below).
I am not sure what you do, since I am not very familiar with Unity surface shaders, but judging from the code and the result it is something different.
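For the second option, the usual signed-distance formula for a plane with unit normal n through a point p0 is (my addition, not part of the original answer):

$$d = \hat{n} \cdot (p - p_0)$$

For a horizontal fog plane at height y0 this reduces to d = p.y - y0, which depends only on the world-space position of the fragment, not on the camera.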
It seems that this is caused by _CameraDepthTexture; that's why the depth is calculated relative to the camera position.
But I don't know how to correct it... It seems that there is no way to get the depth from another point. Any idea?
Here is another example. In green you can "see" the object, and the blue line is, for me, the fog as it should be.

Lookup Pixelshader with HLSL & XNA

First of all, I'm new to XNA and HLSL, so my knowledge is very limited.
I'm writing a small application to display a digital elevation model consisting of 16-bit values in 2D, using different colors for different heights.
The color mapping is done by a pixel shader via a lookup texture.
At the moment I'm putting the values into the red and green components of a texture2D and mapping them to colors in a 256x256 texture.
As the coloring is discrete (not continuous), I set MinFilter/MagFilter to Point, which leads to a blocky look when zooming in.
Is there a way to get the linear filtering back after the lookup? Or does anybody know a better way to do the mapping?
Shader:
sampler2D tex1 : register(s0) = sampler_state
{
    MinFilter = Point;
    MagFilter = Point;
    MipFilter = linear;
};

texture2D lookupTex;
sampler2D lookup = sampler_state
{
    Texture = <lookupTex>;
    MinFilter = Point;
    MagFilter = Point;
    MipFilter = Point;
};

float4 PixelShaderLookup(float4 incol : COLOR, float2 UV : TEXCOORD0) : COLOR0
{
    float4 inCol = tex2D(tex1, UV);

    half3 scale = (256 - 1.0) / 256;
    half3 offset = 1.0 / (2.0 * 256);

    float4 outCol = tex2D(lookup, scale * inCol.gr + offset);
    return outCol;
}
Thanks for your help and a happy new year :)
