Read framebuffer in Metal shader - iOS

I render an image in two steps, so I need two pairs of vertex and fragment shaders.
In the second step, the second fragment shader needs the framebuffer that was rendered by the first pair.
How can I get that framebuffer, or the color at a specific coordinate of it?
I have read the following answers:
iOS Metal Shader - Texture read and write access?
How to chain filters in Metal for iOS?
fragment float4 second_fragment(VertexOut vert [[stage_in]],
                                texture2d<float> tex [[texture(0)]],
                                float4 framebuffer [[color(0)]])
{
    constexpr sampler basicSampler;  // declared here so the snippet is self-contained
    float4 textureColor = tex.sample(basicSampler, vert.texCoor);
    return textureColor;
}
The [[color(0)]] qualifier only gives me the color at the current fragment's own coordinate. I need to read the color at arbitrary other coordinates.
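As far as I understand, [[color(0)]] only exposes the framebuffer value at the fragment's own position; the usual way to read arbitrary positions (as described in the linked answers) is to have the first pass render into a texture and bind that texture to the second pass, which can then sample it anywhere. A rough sketch of what the second fragment shader could look like, assuming VertexOut carries a texCoor member as above and the first pass's output is bound at texture index 0 (the names here are only illustrative):

constexpr sampler linearSampler(coord::normalized, filter::linear);

fragment float4 second_fragment(VertexOut vert [[stage_in]],
                                texture2d<float> firstPassTex [[texture(0)]])
{
    // Color of the first pass at this fragment's own position...
    float4 colorHere = firstPassTex.sample(linearSampler, vert.texCoor);
    // ...and at any other normalized coordinate you need.
    float4 colorElsewhere = firstPassTex.sample(linearSampler, float2(0.25f, 0.75f));
    return (colorHere + colorElsewhere) * 0.5f; // combine them however the effect requires
}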

Related

Creating texture in Vertex shader and passing to Fragment to achieve smudging brush with Metal?

I am trying to add a smudge effect to my paint brush project. To achieve that, I think I need to sample the current results (which are in paintedTexture) at the coordinates where the brush stroke started and pass them to the fragment shader.
I have a vertex shader such as:
vertex VertexOut vertex_particle(device Particle *particles [[buffer(0)]],
                                 constant RenderParticleParameters *params [[buffer(1)]],
                                 texture2d<half> imageTexture [[ texture(0) ]],
                                 texture2d<half> paintedTexture [[ texture(1) ]],
                                 uint instance [[instance_id]])
{
    VertexOut out;
    // ...
And a fragment shader such as:
fragment half4 fragment_particle(VertexOut in [[ stage_in ]],
                                 half4 existingColor [[color(0)]],
                                 texture2d<half> brushTexture [[ texture(0) ]],
                                 float2 point [[ point_coord ]]) {
    // ...
Is it possible to create a clipped texture from the paintedTexture and send it to the fragment shader?
paintedTexture is the current results that have been painted to the canvas. I would like to create a new texture from paintedTexture using the same area as the brush texture and pass it to the fragment shader.
The existingColor [[color(0)]] in the fragment shader is of no use since it is the current color, not the color at the beginning of a stroke. If I use existingColor, it's like using transparency (or a transfer mode based on what math is used to combine it with a new color).
If I am barking up the wrong tree, any suggestions on how to achieve a smudging effect with Metal would potentially be acceptable answers.
Update: I tried using a texture2d in the VertexOut struct:
struct VertexOut {
    float4 position [[ position ]];
    float point_size [[ point_size ]];
    texture2d<half> paintedTexture;
};
But it fails to compile with the error:
vertex function has invalid return type 'VertexOut'
It doesn't seem possible to have an array in the VertexOut struct either (which isn't nearly as ideal as a texture, but it could be a path forward):
struct VertexOut {
    float4 position [[ position ]];
    float point_size [[ point_size ]];
    half4 paintedPixels[65536];
};
Gives me the error:
type 'VertexOut' is not valid for attribute 'stage_in'
It's not possible for shaders to create textures. They could fill an existing one, but I don't think that's what you want or need, here.
I would expect you could pass paintedTexture to the fragment shader and use the vertex shader to note where, from that texture, to sample. So, just coordinates.
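A minimal sketch of that suggestion, assuming the vertex shader works out (or is handed) the paintedTexture coordinate where the stroke started and forwards it to the fragment stage; the names and the way the two samples are combined below are purely illustrative:

struct VertexOut {
    float4 position [[ position ]];
    float point_size [[ point_size ]];
    float2 smudgeUV;   // where in paintedTexture the stroke started
};

fragment half4 fragment_particle(VertexOut in [[ stage_in ]],
                                 half4 existingColor [[color(0)]],
                                 texture2d<half> brushTexture [[ texture(0) ]],
                                 texture2d<half> paintedTexture [[ texture(1) ]],
                                 float2 point [[ point_coord ]]) {
    constexpr sampler s(coord::normalized, filter::linear);
    half4 brush  = brushTexture.sample(s, point);
    half4 picked = paintedTexture.sample(s, in.smudgeUV);  // color picked up at the stroke start
    return half4(picked.rgb, brush.a * picked.a);          // one possible way to combine them
}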

Masking Two Textures in Metal

I have two different textures. One is a colored image and the other is simply an alpha (mask) image, and I want to mask the colored texture with it. How can I do that in the Metal Shading Language? One texture is 128x128 and the other is 256x256; I want the masked result at 128x128.
fragment float4 fragmentShaderone(VertexOut params [[stage_in]],
                                  texture2d<float, access::sample> srcTexture [[texture(0)]],
                                  texture2d<float, access::sample> maskTexture [[texture(1)]])
{
    constexpr sampler defaultSampler;
    float4 srcColor  = srcTexture.sample(defaultSampler, params.textureCoordinates) * float4(1, 0, 0, 40.0/255.0);
    float4 maskColor = maskTexture.sample(defaultSampler, params.textureCoordinates);
    return srcColor * maskColor;
}
Here, when sampling, I am using the same coordinates for both the mask and the source image.
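That works because sample coordinates are normalized (0..1) by default, so the same params.textureCoordinates address both the 128x128 source and the 256x256 mask; the hardware scales them to each texture's own size. If you want the filtering and edge behavior to be explicit rather than relying on the defaults, the sampler can spell them out, for example:

constexpr sampler defaultSampler(coord::normalized,       // 0..1 coordinates, independent of texture size
                                 filter::linear,          // bilinear filtering when the sizes differ
                                 address::clamp_to_edge); // avoid wrapping at the borders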

Rendering per-vertex data to texture (Direct3D 11)

I generate a simple 2D grid with a triangle strip representing a water surface. The first generated vertex has position [0,0] and the last one has [1,1]. For my water simulation I need to store the current positions of the vertices in a texture and then sample those values from the texture in the next frame to get the previous state of the water surface.
So I created a texture sized to the vertex grid. For example, for a 10x10 vertex grid I use a texture with 10x10 pixels (one pixel per vertex) and set this texture as a render target to render all the vertex data into it.
According to MSDN Coordinate Systems, if I use the current positions of the vertices in the grid (bottom-left at [0;0], top-right at [1;1]), the rendered texture looks like this:
So I need to do some conversion to NDC. I convert it in a vertex shader like this:
[vertex.x * 2 - 1; vertex.y * 2 - 1]
Consider this 3x3 grid:
Now the grid is stretched to the whole texture size. Texture coordinates are different from NDC, and apparently I can use the original grid coordinates (before conversion) to sample the texture and get the previous values (positions) of the vertices.
Here is a sample of my vertex/pixel shader code:
This vertex shader converts the coordinates and sends them to the pixel shader with the SV_POSITION semantic (which describes the pixel location).
struct VertexInput
{
    float4 pos : POSITION;
    float2 tex : TEXCOORD;
};

struct VertexOutput
{
    float4 pos : SV_POSITION;
    float2 tex : TEXCOORD;
};

// converts coordinates from the [0,1] range to NDC, i.e. 0,0 maps to -1,-1, etc.
float2 toNDC(float2 px)
{
    return float2(px.x * 2 - 1, px.y * 2 - 1);
}

VertexOutput main(VertexInput input)
{
    VertexOutput output;
    float2 ndc = toNDC(float2(input.pos.x, input.pos.z));
    output.pos = float4(ndc, 1, 1);
    output.tex = float2(input.pos.x, input.pos.z);
    return output;
}
And here's the pixel shader, which saves the values from the vertex shader at the pixel location defined by SV_POSITION.
struct PixelInput
{
    float4 pos : SV_POSITION;
    float2 tex : TEXCOORD;
};

float4 main(PixelInput input) : SV_TARGET
{
    return float4(input.tex.x, input.tex.y, 0, 1);
}
And we're finally getting to my problem! I use the graphics debugger in Visual Studio 2012, which allows me to look at the rendered texture and its values. I would expect that the pixel at location [0,1] (in the texel coordinate system) should have the value [0,0] (or [0,0,0,1] to be precise, for the RGBA format), but it seems the final pixel value is interpolated between 3 vertices, so I get a wrong value for the given vertex.
Screenshot from VS graphics debugger:
Rendered 3x3 texture ([0;1] location in texel coordinate system):
Values from vertex and pixel shader:
How to render the exact value from vertex shader to texture for a given pixel?
I am pretty new to computer graphics and Direct3D 11, so please excuse my deficiencies.
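One approach that is sometimes used for this kind of vertex-data-to-texture pass is to position each vertex at the center of its own texel (rather than on a texel corner), so that the interpolated value at that texel center is exactly the vertex's value; rendering the grid as a point list with such a mapping also sidesteps edge cases from the triangle rasterization rules. A rough sketch of the adjusted conversion, assuming a square N x N target (N = 3 in this example) and purely illustrative names:

// Map a grid coordinate in [0,1] (N vertices along one axis) to the NDC of
// the center of texel i = u * (N - 1), so each vertex owns exactly one texel.
float2 toTexelCenterNDC(float2 px, float N)
{
    float2 texel = px * (N - 1.0);          // which texel this vertex should write
    return (2.0 * texel + 1.0) / N - 1.0;   // NDC of that texel's center
    // (Depending on the texel layout you want, the y component may additionally
    // need to be negated, as the MSDN coordinate-systems link discusses.)
}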

OpenGL ES 2 - Drawing GL_POINTS directly vs indirectly

I am creating an iOS app for drawing / sketching and right now I am encountering a problem when I draw GL_POINTS indirectly: the points go into an FBO, and then this FBO is stamped onto a final FBO.
Here is the result when I draw the GL_POINTS DIRECTLY to an FBO
And here is the result when I draw the points INDIRECTLY, by drawing to an FBO and then drawing this FBO onto another FBO
As you can see, the indirect method doesn't blend quite right. I don't know if the problem is that my blend mode is wrong or that there's a loss of precision when drawing indirectly.
Here is my algorithm :
I. Drawing the points to an offscreen FBO named drawingFramebuffer:
// pre-multiplied alpha
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glBindFramebuffer(GL_FRAMEBUFFER, drawingFramebuffer);
// clear drawing FBO
glClearColor(0, 0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT);
...
// draw the points
glDrawArrays(GL_POINTS, 0, totalPoints);
in the fragment shader
uniform sampler2D brushTexture;
uniform highp vec4 brushColor;
void main()
{
    highp vec4 textureAlpha = texture2D(brushTexture, gl_PointCoord.xy);
    gl_FragColor = vec4(brushColor.rgb * textureAlpha.a, textureAlpha.a);
}
II. Then, stamping drawingFramebuffer onto the final framebuffer using a quad:
// draw the texture using pre-multiplied alpha
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glBindFramebuffer(GL_FRAMEBUFFER, finalFramebuffer);
...
// draw the quad vertices using triangle strip
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
in the fragment shader
uniform sampler2D texture;
varying highp vec2 textureCoord;
void main()
{
    highp vec4 textureColor = texture2D(texture, textureCoord);
    gl_FragColor = textureColor;
}
I'm utterly confused: how can drawing directly and indirectly yield different results when the blend modes are the same?
Thanks guys/girls for any help!
--- Edited to Add ---
After some calculations in Excel, I found out that my blending is already correct, so I suspect the problem is a loss of precision that happens when reading the drawing FBO.
Okay, I've finally fixed it by:
I. Disregarding the RGB calculation when drawing the GL_POINTS. Bottom line: the RGB value was the culprit. So I only compute the alpha when drawing the GL_POINTS (using the default pre-multiplied blending).
II. Applying the coloring when 'stamping' the drawing FBO, by passing the color value as a uniform and setting the fragment color to that color multiplied by the sampled alpha.
I think this is the method that Procreate and other drawing apps use. Although now I have the problem of what happens if the color value is varied (not a uniform)...

Normal calculations in vertex shader?

I'm using the Rastertek framework for terrain generation. I've got the terrain rendered from the vertex shader, but I don't know how to calculate the normals in the shader. There's a function call in one of the classes that generates the normals from the terrain, but it only works if the terrain was generated on the CPU. Here's the code for the vertex shader I'm using:
////////////////////////////////////////////////////////////////////////////////
// Filename: terrain.vs
////////////////////////////////////////////////////////////////////////////////
#include "terrain.fx"
/////////////
// GLOBALS //
/////////////
cbuffer MatrixBuffer
{
    matrix worldMatrix;
    matrix viewMatrix;
    matrix projectionMatrix;
};
//////////////
// TYPEDEFS //
//////////////
struct VertexInputType
{
    float4 position : POSITION;
    float3 normal : NORMAL;
};

struct PixelInputType
{
    float4 position : SV_POSITION;
    float3 normal : NORMAL;
};
////////////////////////////////////////////////////////////////////////////////
// Vertex Shader
////////////////////////////////////////////////////////////////////////////////
PixelInputType TerrainVertexShader(VertexInputType input)
{
    PixelInputType output;

    // Displace the vertex height by the procedural terrain function.
    input.position.y = input.position.y + terrain(input.position.x, input.position.z);

    // Change the position vector to be 4 units for proper matrix calculations.
    input.position.w = 1.0f;

    // Calculate the position of the vertex against the world, view, and projection matrices.
    output.position = mul(input.position, worldMatrix);
    output.position = mul(output.position, viewMatrix);
    output.position = mul(output.position, projectionMatrix);

    // Calculate the normal vector against the world matrix only.
    output.normal = mul(input.normal, (float3x3)worldMatrix);

    // Normalize the normal vector.
    output.normal = normalize(output.normal);

    return output;
}
Your big problem with generating normals in a shader is that you need knowledge of the surrounding vertices. This is something you can overcome with a geometry shader, but not with a vertex shader.
A simple way to calculate a normal is to take the polygon normals (form two edge vectors and cross-product them for the face normal) of all polygons that touch the vertex you are looking at, then add them up and normalise. As such, if you haven't got access to a geometry shader, the only real solution is to use the CPU. Even then this is not the best way to calculate the vertex normals; you may still find it better to use a more complex algorithm yet, and that will give you even more problems!
So yeah, CPU or geometry shader ... those are, basically, your options.
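For reference, the "two edges, cross product" step described above is just this small function (shown in HLSL-style syntax; the same idea applies whether you run it on the CPU or in a geometry shader):

// Face normal of triangle (a, b, c): cross the two edge vectors and normalize.
float3 FaceNormal(float3 a, float3 b, float3 c)
{
    return normalize(cross(b - a, c - a));
}

The per-vertex normal is then the normalized sum of FaceNormal over every triangle that shares the vertex.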
