Sample multiple textures and render to multiple textures in the same pixel shader - XNA

My overall goal is to be able to create a pixel shader that takes multiple textures as input and renders to multiple targets.
Together with an initialise and a finalise shader, repeated runs of this shader will give me my result.
I've created shaders with multiple input textures before, and shaders that render to multiple targets, but I've never combined the two.
What I believe is causing issues is my lack of a full understanding of semantics, and of how to properly set up input and output textures.
I've seen several different ways of getting input textures and am getting confused as to how it should be set up.
Below is the code for a shared struct that is output by the init and iterate shaders (the finalise shader simply outputs a colour):
struct FRACTAL_OUTPUT
{
    float4 IterationsAndControl : COLOR0;
    float4 Variables1And2 : COLOR1;
    float4 Variables3And4 : COLOR2;
};
Below are the texture declarations for the iterate and finalise shaders (the init shader doesn't use any textures):
Texture2D IterationsAndControl;
sampler IterationsAndControlSampler : register(s4)
{
    Texture = <IterationsAndControl>;
};
Texture2D Variables1And2;
sampler Variables1And2Sampler : register(s5)
{
    Texture = <Variables1And2>;
};
Texture2D Variables3And4;
sampler Variables3And4Sampler : register(s6)
{
    Texture = <Variables3And4>;
};
In the C# XNA code, I set the render targets (via GraphicsDevice.SetRenderTargets()), set the texture parameters (via Effect.Parameters["TextureVariableName"].SetValue()), and then draw a quad (via a SpriteBatch).
Any help would be much appreciated, as I can't find any examples of doing something like this.

For anyone else who's interested, I've managed (through a lot of trial and error!) to get this working.
Since I'm doing number crunching, I set my render target format to SurfaceFormat.Vector4 (I previously had Rgba64, which didn't seem to work).
Because I'm now using a vector format, I also had to change the device sampler states to point sampling:
GraphicsDevice.SamplerStates[0] = SamplerState.PointClamp;
GraphicsDevice.SamplerStates[1] = SamplerState.PointClamp;
GraphicsDevice.SamplerStates[2] = SamplerState.PointClamp;
GraphicsDevice.SamplerStates[3] = SamplerState.PointClamp;
My FRACTAL_OUTPUT struct hasn't changed.
What has changed is the samplers. I'm now using the macro that the stock XNA effects use to declare textures and samplers:
#define DECLARE_TEXTURE(Name, index) \
    Texture2D<float4> Name : register(t##index); \
    sampler Name##Sampler : register(s##index);
Then, to sample multiple textures, I'm doing the following:
DECLARE_TEXTURE(IterationsAndControl, 1);
....
return tex2D(IterationsAndControlSampler, texCoord).x;
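Putting it together, a minimal sketch of an iterate pixel shader that samples all three inputs and writes all three targets might look like this (the function name and register indices are illustrative, not from my actual code):
DECLARE_TEXTURE(IterationsAndControl, 1);
DECLARE_TEXTURE(Variables1And2, 2);
DECLARE_TEXTURE(Variables3And4, 3);

FRACTAL_OUTPUT IteratePS(float2 texCoord : TEXCOORD0)
{
    FRACTAL_OUTPUT output;

    // Read the previous pass's values for this texel.
    float4 iterations = tex2D(IterationsAndControlSampler, texCoord);
    float4 vars12 = tex2D(Variables1And2Sampler, texCoord);
    float4 vars34 = tex2D(Variables3And4Sampler, texCoord);

    // ... one fractal iteration step would go here ...

    // Write one float4 per render target via the COLORn semantics.
    output.IterationsAndControl = iterations;
    output.Variables1And2 = vars12;
    output.Variables3And4 = vars34;
    return output;
}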
Then in C# I set the render targets in the usual way, and set my input textures as I'd set any other effect parameter.
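For reference, a minimal sketch of that C# side (all variable names here are illustrative, and the SpriteBatch overload assumes XNA 4.0):
// Three Vector4 render targets, one per COLOR output in FRACTAL_OUTPUT.
RenderTarget2D iterations = new RenderTarget2D(GraphicsDevice, width, height,
    false, SurfaceFormat.Vector4, DepthFormat.None);
RenderTarget2D vars12 = new RenderTarget2D(GraphicsDevice, width, height,
    false, SurfaceFormat.Vector4, DepthFormat.None);
RenderTarget2D vars34 = new RenderTarget2D(GraphicsDevice, width, height,
    false, SurfaceFormat.Vector4, DepthFormat.None);

// Bind all three targets, then hand the previous pass's outputs to the effect.
GraphicsDevice.SetRenderTargets(iterations, vars12, vars34);
iterateEffect.Parameters["IterationsAndControl"].SetValue(previousIterations);
iterateEffect.Parameters["Variables1And2"].SetValue(previousVars12);
iterateEffect.Parameters["Variables3And4"].SetValue(previousVars34);

// Draw a full-screen quad through SpriteBatch with the custom effect applied.
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
    SamplerState.PointClamp, DepthStencilState.None,
    RasterizerState.CullNone, iterateEffect);
spriteBatch.Draw(previousIterations, GraphicsDevice.Viewport.Bounds, Color.White);
spriteBatch.End();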

Related

iOS Metal – reading old values while writing into texture

I have a kernel function (compute shader) that reads nearby pixels of a pixel from a texture and, based on the old nearby-pixel values, updates the value of the current pixel (it's not a simple convolution).
I've tried creating a copy of the texture using a BlitCommandEncoder and feeding the kernel function two textures - one read-only and another write-only. Unfortunately, this approach is time-consuming on the GPU.
What is the most efficient (GPU- and memory-wise) way of reading old values from a texture while updating its content?
(Bit late but oh well)
There is no way you could make it work with only one texture, because the GPU is a highly parallel processor: the kernel you wrote for a single pixel is invoked in parallel for all pixels, and you can't tell which one runs first.
So you definitely need two textures. The way you should probably do it is with two textures, where one is the "old" one and the other the "new" one. Between passes you swap the roles of the textures, so old becomes new and new becomes old. Here is some pseudoswift:
var currentText: MTLTexture = ...   // created elsewhere
var nextText: MTLTexture = ...      // created elsewhere
let semaphore = dispatch_semaphore_create(1)

func update() {
    // Wait until the previous update has finished
    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER)

    let commands = commandQueue.commandBuffer()
    let encoder = commands.computeCommandEncoder()
    encoder.setTexture(currentText, atIndex: 0)   // read from the "old" texture
    encoder.setTexture(nextText, atIndex: 1)      // write into the "new" texture
    encoder.dispatchThreadgroups(...)
    encoder.endEncoding()

    // When the GPU is done, swap the textures and signal that updating is done
    commands.addCompletedHandler { _ in
        swap(&currentText, &nextText)
        dispatch_semaphore_signal(semaphore)
    }
    commands.commit()
}
I have written plenty of iOS Metal code that samples (or reads) from the same texture it is rendering into. I am using the render pipeline, setting my texture as the render target attachment, and also loading it as a source texture. It works just fine.
To be clear, a more efficient approach is to use the [[color(n)]] attribute in your fragment shader, but that is only suitable if all you need is the value of the current fragment, not any other nearby positions. If you need to read from other positions in the render target, I would just load the render target as a source texture into the fragment shader.
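A minimal sketch of that setup (Swift, current Metal API; the pipeline state, command buffer and vertex setup are assumed to exist elsewhere, and note that reading arbitrary positions from the texture you are currently rendering into is not formally guaranteed by Metal):
// Use the same texture as both the color attachment and a fragment source texture.
let passDescriptor = MTLRenderPassDescriptor()
passDescriptor.colorAttachments[0].texture = texture
passDescriptor.colorAttachments[0].loadAction = .load    // keep the existing contents
passDescriptor.colorAttachments[0].storeAction = .store

let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: passDescriptor)!
encoder.setRenderPipelineState(pipelineState)
encoder.setFragmentTexture(texture, index: 0)            // read from the same texture
encoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4)
encoder.endEncoding()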

Firemonkey does strange, bizarre things with Alpha

Working with Delphi / Firemonkey XE8. I've had some decent luck with it recently, although you have to hack the heck out of it to get it to do what you want. My current project is to evaluate its low-level 3D capabilities to see if I can use them as a starting point for a game project. I also know Unity3D quite well, and am considering using Unity3D instead, but I figure that Delphi / Firemonkey might give me some added flexibility in my game design because it is so minimal.
So I decided to dig into an Embarcadero-supplied sample... specifically the LowLevel3D sample. This is the cross-platform sample that shows you how to do nothing other than draw a rotating square on the screen with some custom shaders of your choice and have it look the same on all platforms (although it actually doesn't work AT ALL the same on all platforms... but we won't get into that).
Embarcadero does not supply the original uncompiled shaders for the project (which I might add is really lame), and requires you to supply your own compatible shaders (some compiled, some not) for the various platforms you're targeting (also lame)... so my first job has been to create a shader that works with their existing sample but does something OTHER than what the sample already does. Specifically, since I'm creating a 2D game, I wanted to make sure that I could do sprites with alpha transparency - basic stuff... if I can get this working, I'll probably never have to write another shader for the whole game.
After pulling my hair out for many hours, I came up with this little shader that works with the same parameters as the demo.
Texture2D mytex0: register(t0);
Texture2D mytex1: register(t1);
float4 cccc : register(v0) ;
struct PixelShaderInput
{
    float4 Pos: COLOR;
    float2 Tex: TEXCOORDS;
};

SamplerState g_samLinear
{
    Filter = MIN_MAG_MIP_LINEAR;
    AddressU = Wrap;
    AddressV = Wrap;
};

RasterizerState MyCull {
    FrontCounterClockwise = FALSE;
};

float4 main(PixelShaderInput pIn): SV_TARGET
{
    float4 cc, c;
    float4 ci = mytex1.Sample(g_samLinear, pIn.Tex.xy);
    c = ci;
    c.a = 0; //<----- DOES NOT actually SET ALPHA TO ZERO ... THIS IS A PROBLEM
    cc = c;
    return cc;
}
Never mind that it doesn't actually do much with the parameters, but check out the line where I set the output's ALPHA to 0. Well... I found that this actually HAS NO EFFECT!
But it gets spookier than this. I found that turning on CULLING in the Delphi app FIXED this issue. So I figure... no big deal then, I'll just manually draw both sides of the sprite... right? Nope! When I manually drew a double-sided sprite... the problem came back!
Check this image: shader is ignoring alpha=0 when double-sided
In the above picture, alpha is clearly SOMEWHAT obeyed, because the clouds are not surrounded by a black box; however, the cloud itself is super-saturated. (I find that if I multiply rgb by a, the colors come out approximately right, but I'm not going to do that in real life for obvious reasons.)
I'm new to the concept of writing custom shaders. Any insight is appreciated.

iOS, OpenGL ES 2.0: using multiple textures, but only get one active texture unit

I'm developing an OpenGL ES application for iOS.
I'm trying to blend two textures in my shader, but I always get only one active texture unit.
I have generated two textures and linked them to two sampler2D uniforms in the fragment shader.
I set them to units 0 and 1 by using glUniform1f();
And I have bound the textures using a loop:
for (int i = 0; i < 2; i++)
{
    glActiveTexture(GL_TEXTURE0 + i);
    glBindTexture(GL_TEXTURE_2D, textures[i]);
}
But when I draw the frame, only one unit is active, like in the picture below.
So, what have I been doing wrong?
The way I read the output of that tool (I have not used it), the left pane shows the currently active texture unit. There is always exactly one active texture unit, corresponding to your last call of glActiveTexture(). This means that after you call:
glActiveTexture(GL_TEXTURE0 + i);
the value in the left circled field will be the value of i.
The right pane shows the textures bound to each texture unit. Since you bound textures to unit 0 and 1 with the loop shown in your question, it shows a texture (with id 201) bound to texture unit 0, and a texture (with id 202) bound to texture unit 1.
So as far as I can tell, the state shown in the screenshot represents exactly what you set based on your description and code fragment.
Based on the wording in your question, you might be under the impression that glActiveTexture() enables texture units. That is not the case. glActiveTexture() only specifies which texture unit subsequent glBindTexture() calls operate on.
Which textures are used is then determined by the values you set for the sampler uniforms of your shader program, and by the textures you bound to the corresponding texture units. The value of the currently active texture unit has no influence on the draw call, only on texture binding.
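To make that concrete, here is a minimal sketch of the usual wiring (C, OpenGL ES 2.0; the uniform names are illustrative, not taken from the question):
/* Bind one texture to each texture unit, then point each sampler uniform
   at its unit. Note that glUniform1i takes the unit index, not the texture id. */
glUseProgram(program);

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textures[0]);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, textures[1]);

glUniform1i(glGetUniformLocation(program, "u_texture0"), 0);
glUniform1i(glGetUniformLocation(program, "u_texture1"), 1);

glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);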

Getting the color of the back buffer in GLSL

I am trying to extract the color behind my shader fragment. I have searched around and found various examples of people doing this, like so:
vec2 position = ( gl_FragCoord.xy / u_resolution.xy );
vec4 color = texture2D(u_backbuffer, v_texCoord);
This makes sense. However, nobody has shown an example of how you pass in the back-buffer uniform.
I tried to do it like this:
int backbuffer = glGetUniformLocation(self.shaderProgram->program_, "u_backbuffer");
GLint textureID;
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &textureID);//tried both of these one at a time
glGetIntegerv(GL_RENDERBUFFER_BINDING, &textureID);//tried both of these one at a time
glUniform1i(backbuffer, textureID);
But I just get black. This is in cocos2d for iOS, FYI.
Any suggestions?
You can do this, but only on iOS 6.0. Apple added an extension called GL_EXT_shader_framebuffer_fetch which lets you read the contents of the current framebuffer at the fragment you're rendering. This extension introduces a new built-in variable, gl_LastFragData, which you can read in your fragment shader.
This question by RayDeeA shows an example of this in action, although you'll need to change the name of the extension as combinatorial points out in their answer.
This should be supported on all devices running iOS 6.0 and is a great way to implement custom blend modes. I've heard that it's a very low cost operation, but haven't done much profiling myself yet.
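A minimal fragment-shader sketch of the extension in use (GLSL ES 1.0; the uniform and the blend done here are just illustrative):
#extension GL_EXT_shader_framebuffer_fetch : require
precision mediump float;

uniform vec4 u_srcColor; // illustrative source color

void main()
{
    // gl_LastFragData[0] holds the color already in the framebuffer at this
    // fragment, which makes custom blend modes possible inside the shader.
    vec4 dst = gl_LastFragData[0];
    gl_FragColor = mix(dst, u_srcColor, u_srcColor.a);
}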
That is not allowed. You cannot simultaneously sample from an image that you're currently writing to as part of an FBO.

Re-use of texture unit for different texture types breaks in Chrome

Check out the following test:
http://binks.knobbits.org/webgl/texture3.html
It's a simple test of cube textures. It also has a 2D texture in there for good measure.
I discovered that in some browsers (so far, Chrome) the image is not displayed properly if I re-use the same texture unit for drawing the cube texture as for the 2D texture.
There is a checkbox at the bottom marked "Use separate texture units for the cube texture on the sphere and the 2D texture on the floor" that shows this.
Is this a bug in Chrome or in my code?
I don't see anything wrong with your code, but:
1) You can't use the same texture for two different targets. In other words, you can't do this:
tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.bindTexture(gl.TEXTURE_CUBE_MAP, tex);
2) You can't use both a TEXTURE_2D and a CUBE_MAP on a texture unit AT THE SAME TIME.
You can assign both, but when you render you're only allowed to reference one of them in your shaders. In other words, this:
gl.activeTexture(gl.TEXTURE0);
tex1 = gl.createTexture();
tex2 = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex1);
gl.bindTexture(gl.TEXTURE_CUBE_MAP, tex2);
is okay, but a shader that tried to use both textures from texture unit 0 would fail.
I have reordered the code of the drawing functions a bit and now they are working.
Square:
TexturedSquare.prototype.draw = function() {
    gl.bindBuffer(gl.ARRAY_BUFFER,this.v);
    gl.enableVertexAttribArray(gl.va_vertex);
    gl.enableVertexAttribArray(gl.va_normal);
    gl.enableVertexAttribArray(gl.va_tex1pos);
    gl.vertexAttribPointer(gl.va_vertex,4,gl.FLOAT,false,10*4,0);
    gl.vertexAttribPointer(gl.va_normal,4,gl.FLOAT,false,10*4,4*4);
    gl.vertexAttribPointer(gl.va_tex1pos,2,gl.FLOAT,false,10*4,4*8);
    gl.activeTexture(gl.TEXTURE0);
    gl.bindTexture(gl.TEXTURE_2D,this.texture);
    gl.bindTexture(gl.TEXTURE_CUBE_MAP,null);
    gl.uniform1i(shader.textures,1);
    gl.uniform1i(shader.texture1,0);
    gl.uniform1i(shader.cube_textures,0);
    gl.uniform1i(shader.cubeTexture0,1);
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER,this.e);
    gl.drawElements(gl.TRIANGLES,this.l,gl.UNSIGNED_SHORT,0);
    gl.disableVertexAttribArray(gl.va_tex1pos);
}
Sphere:
GLHTexturedSphere.prototype.draw = function() {
    gl.bindBuffer(gl.ARRAY_BUFFER,this.vbuf);
    gl.enableVertexAttribArray(gl.va_vertex);
    gl.enableVertexAttribArray(gl.va_normal);
    gl.enableVertexAttribArray(this.va_cubetex0pos);
    gl.vertexAttribPointer(gl.va_vertex,4,gl.FLOAT,false,8*4,0);
    gl.vertexAttribPointer(gl.va_normal,4,gl.FLOAT,false,8*4,4*4);
    gl.vertexAttribPointer(this.va_cubetex0pos,4,gl.FLOAT,false,8*4,4*4);
    gl.activeTexture(gl.TEXTURE0);
    gl.bindTexture(gl.TEXTURE_2D,null);
    gl.bindTexture(gl.TEXTURE_CUBE_MAP,this.texture);
    gl.uniform1i(shader.textures,0);
    gl.uniform1i(shader.texture1,1);
    gl.uniform1i(shader.cube_textures,1);
    gl.uniform1i(shader.cubeTexture0,0);
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER,this.ebuf);
    gl.drawElements(gl.TRIANGLES,this.l,gl.UNSIGNED_SHORT,0);
    gl.disableVertexAttribArray(gl.va_cubetex0pos);
}
Both of them now use TEXTURE0. Please check the WebGL state and the uniform values.
The original code is a bit hard for me to follow, sorry. But I think the problem is that the texture1 and cubeTexture0 uniforms were being set to the same value.
