Bind a texture before drawing it (WebGL)

My code works, but I am wondering why!
I have 2 textures:
uniform sampler2D uSampler0;
uniform sampler2D uSampler1;
void main() {
vec4 color0 = texture2D(uSampler0, vTexCoord);
vec4 color1 = texture2D(uSampler1, vTexCoord);
gl_FragColor = color0 * color1;
}
and my js code
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D,my_texture_ZERO);
gl.uniform1i(program.uSampler0,0);
gl.activeTexture(gl.TEXTURE1);
gl.bindTexture(gl.TEXTURE_2D,my_texture_ONE);
gl.uniform1i(program.uSampler1);
// uncomment one of the 3, it works.
// gl.bindTexture(gl.TEXTURE_2D, my_texture_ZERO);
// gl.bindTexture(gl.TEXTURE_2D, my_texture_ONE);
// gl.bindTexture(gl.TEXTURE_2D, texture_FOR_PURPOSE_ONLY);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
Before the gl.drawArrays call, I have tested the 3 bindings,
and each one works!
So I do not understand the real underlying pipeline.
Thanks for some explanations.

This line is invalid:
gl.uniform1i(program.uSampler1);
You're not passing a value to the sampler.
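Assuming my_texture_ONE is meant to stay on texture unit 1, it presumably should be:
gl.uniform1i(program.uSampler1, 1);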
The way WebGL texture units work is that they are global state inside WebGL.
gl.activeTexture sets the texture unit that all other texture commands affect. For each texture unit there are 2 bind points, TEXTURE_2D and TEXTURE_CUBE_MAP.
You can think of it like this
gl = {
activeTextureUnit: 0,
textureUnits: [
{ TEXTURE_2D: null, TEXTURE_CUBE_MAP: null, },
{ TEXTURE_2D: null, TEXTURE_CUBE_MAP: null, },
{ TEXTURE_2D: null, TEXTURE_CUBE_MAP: null, },
...
],
};
gl.activeTexture just does this
gl.activeTexture = function(unit) {
gl.activeTextureUnit = unit - gl.TEXTURE0;
};
gl.bindTexture does this
gl.bindTexture = function(bindPoint, texture) {
gl.textureUnits[gl.activeTextureUnit][bindPoint] = texture;
};
gl.texImage2D and gl.texParameteri look up which texture to work with like this
gl.texImage2D = function(bindPoint, .....) {
var texture = gl.textureUnits[gl.activeTextureUnit][bindPoint];
// now do something with texture
};
In other words, inside WebGL there is a global array of texture units. gl.activeTexture and gl.bindTexture manipulate that array.
gl.texXXX manipulate the textures themselves but they reference the textures indirectly through that array.
gl.uniform1i(someSamplerLocation, unitNumber) sets the shader's uniform to look at a particular index in that array of texture units.
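Putting it together, here is a corrected sketch of the question's setup code (using the question's own variable names, and assuming the program is already in use):
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, my_texture_ZERO); // textureUnits[0].TEXTURE_2D = my_texture_ZERO
gl.uniform1i(program.uSampler0, 0); // uSampler0 reads from texture unit 0
gl.activeTexture(gl.TEXTURE1);
gl.bindTexture(gl.TEXTURE_2D, my_texture_ONE); // textureUnits[1].TEXTURE_2D = my_texture_ONE
gl.uniform1i(program.uSampler1, 1); // uSampler1 reads from texture unit 1
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);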

It's working correctly because in the presented code you are sending the appropriate uniforms for the samplers.
The first texture was set to unit 0 by calling glActiveTexture(GL_TEXTURE0) and was bound afterward. Then a switch was made to unit 1.
At that point there were two separate textures bound, one in each unit.
At the end these unit indices were passed as the uniforms for the samplers, which is how you indicate which texture a sampler should use: in this case passing 0, corresponding to the GL_TEXTURE0 unit, to the first uniform, and analogously for the second.
Things should probably work even without uncommenting those lines.

Related

How to bind a player position to a shader with melonJS?

I've created a GLSL shader as:
<script id="player-fragment-shader" type="x-shader/x-fragment">
precision highp float;
varying vec3 fNormal;
uniform vec2 resolution;
float circle(in vec2 _pos, in float _radius) {
vec2 dist = _pos - vec2(0.5);
return 1.-smoothstep(_radius - (_radius * 0.5),
_radius + (_radius * 0.5),
dot(dist, dist) * 20.0);
}
void main() {
vec2 pos = gl_FragCoord.xy/resolution.xy;
// Subtract the inverse of orange from white to get an orange glow
vec3 color = vec3(circle(pos, 0.8)) - vec3(0.0, 0.25, 0.5);
gl_FragColor = vec4(color, 0.8);
}
</script>
<script id="player-vertex-shader" type="x-shader/x-vertex">
precision highp float;
attribute vec3 position;
attribute vec3 normal;
uniform mat3 normalMatrix;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
void main() {
vec4 pos = modelViewMatrix * vec4(position, 0.25);
gl_Position = projectionMatrix * pos;
}
</script>
I initialize it in the game load by running:
var vertShader = document.getElementById("player-vertex-shader").text;
var fragShader = document.getElementById("player-fragment-shader").text;
var shader = me.video.shader.createShader(me.video.renderer.compositor.gl, vertShader, fragShader);
This is done after video is initialized, and seems to compile the shader program and load fine. The shader also seems to work fine when loading it up in shaderfrog.com and other similar sites.
The problem is, it's leaving me with a totally black screen until I move the character and it redraws. I've read over the WebGL Fundamentals site, and it seems what I'm missing is binding the character position to the GL buffer.
How do I do this in melonJS?
Hi, I wrote the original WebGL compositor for melonJS.
tl;dr: Force the frame to redraw by returning true from your character's entity.update() method. (Or alternatively, increase the animation frame rate to match the game frame rate.)
Example overriding the update method:
update: function (dt) {
this._super(me.Entity, "update", [dt]);
return true;
}
This allows the update to continue operating normally (e.g. updating animation state, etc.), while returning true forces the frame to redraw every time.
It might help to understand how the compositor works, and how your shader is interacting with melonJS entities. This describes the inner workings of WebGL integration with melonJS. In short, there is no explicit step to bind positions to the shader. Positions are sent via the vertex attribute buffer, which is batched up (usually for an entire frame) and sent as one big array to WebGL.
The default compositor can be replaced if you need more control over building the vertex buffer, or if you want to do other custom rendering passes. This is done by passing a class reference to me.video.init in the options.compositor argument. The default is me.WebGLRenderer.Compositor:
me.video.init(width, height, {
wrapper: "screen",
renderer : me.video.WEBGL,
compositor: me.WebGLRenderer.Compositor
});
During the draw loop, the default compositor adds a new quad element to the vertex attribute array buffer for every me.WebGLRenderer.drawImage call. This method emulates the DOM canvas method of the same name. The implementation is very simple; it just converts the arguments into a quad and calls the compositor's addQuad method. This is where the vertex attribute buffer is actually populated.
After the vertex attribute buffer has been completed, the flush method is called, which sends the vertex buffer to the GPU with gl.drawElements.
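To make that concrete, here is a heavily simplified, hypothetical sketch of the addQuad/flush batching idea in raw WebGL; the names and buffer layout are illustrative, not melonJS's actual implementation, and it assumes a gl context and a shader program whose position attribute is already configured:
// Illustrative sketch only -- not melonJS source code.
const MAX_QUADS = 1024;
const positions = new Float32Array(MAX_QUADS * 8); // 4 vertices * (x, y) per quad
const indices = new Uint16Array(MAX_QUADS * 6);    // 2 triangles * 3 indices per quad
for (let q = 0; q < MAX_QUADS; ++q) {
  indices.set([0, 1, 2, 2, 1, 3].map(i => i + q * 4), q * 6);
}
let quadCount = 0;

function addQuad(x, y, w, h) {
  // Append one quad's corners to the batched vertex array.
  positions.set([x, y, x + w, y, x, y + h, x + w, y + h], quadCount * 8);
  quadCount++;
}

function flush(positionBuffer, indexBuffer) {
  if (quadCount === 0) return; // nothing to draw, so no draw call at all
  gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
  gl.bufferData(gl.ARRAY_BUFFER, positions.subarray(0, quadCount * 8), gl.DYNAMIC_DRAW);
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
  gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices.subarray(0, quadCount * 6), gl.STATIC_DRAW);
  gl.drawElements(gl.TRIANGLES, quadCount * 6, gl.UNSIGNED_SHORT, 0); // one draw for the whole batch
  quadCount = 0;
}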
melonJS takes drawing optimization to the extreme. Not only does it batch like-renderables to reduce the number of draw calls (as described above) but it also doesn't send any draw calls if there is nothing to draw. This condition occurs when the frame is identical to the last frame drawn. For example, no entity has moved, the viewport has not scrolled, idle animations have not advanced to the next state, on-screen timer has not elapsed a full second, etc.
It is possible to force the frame to redraw by having any entity in the scene return true from its update method. This is a signal to the game engine that the frame needs to be redrawn. The process is described in more detail on the wiki.

How to manually render a mesh loaded with the DirectX Toolkit

I have a C++/CX project where I'm rendering procedural meshes using DirectX 11, and it all seems to work fine, but now I want to also import and render meshes from files (FBX, to be exact).
I was told to use the DirectX Toolkit for this.
I followed the toolkit's tutorials, and that all worked,
but when I tried doing the same in my project it didn't seem to work. The imported mesh was not visible, and the existing procedural meshes were rendered incorrectly (as if without a depth buffer).
I then tried manually rendering the imported mesh (identically to the procedural meshes, without using the Draw function from DirectXTK).
This works better: the existing meshes are all correct, but the imported mesh's colors are wrong. I use a custom-made vertex and fragment shader that uses only vertex position and color data, but for some reason the imported mesh's normals are sent to the shader instead of the vertex colors.
(I don't even want the normals stored in the mesh, but I don't seem to have the option to export to FBX without normals, and even if I remove them manually from the FBX, DirectXTK seems to recalculate the normals at import.)
Does anyone know what I'm doing wrong?
This is all still relatively new to me, so any help appreciated.
If you need more info, just let me know.
Here is my code for rendering meshes:
First the main render function (which is called once every update):
void Track3D::Render()
{
if (!_loadingComplete)
{
return;
}
static const XMVECTORF32 up = { 0.0f, 1.0f, 0.0f, 0.0f };
// Prepare to pass the view matrix, and updated model matrix, to the shader
XMStoreFloat4x4(&_constantBufferData.view, XMMatrixTranspose(XMMatrixLookAtRH(_CameraPosition, _CameraLookat, up)));
// Clear the back buffer and depth stencil view.
_d3dContext->ClearRenderTargetView(_renderTargetView.Get(), DirectX::Colors::Transparent);
_d3dContext->ClearDepthStencilView(_depthStencilView.Get(), D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);
// Set render targets to the screen.
ID3D11RenderTargetView *const targets[1] = { _renderTargetView.Get() };
_d3dContext->OMSetRenderTargets(1, targets, _depthStencilView.Get());
// Here I render everything:
_TrackMesh->Render(_constantBufferData);
RenderExtra();
_ImportedMesh->Render(_constantBufferData);
Present();
}
The Present-function:
void Track3D::Present()
{
DXGI_PRESENT_PARAMETERS parameters = { 0 };
parameters.DirtyRectsCount = 0;
parameters.pDirtyRects = nullptr;
parameters.pScrollRect = nullptr;
parameters.pScrollOffset = nullptr;
HRESULT hr = S_OK;
hr = _swapChain->Present1(1, 0, &parameters);
if (hr == DXGI_ERROR_DEVICE_REMOVED || hr == DXGI_ERROR_DEVICE_RESET)
{
OnDeviceLost();
}
else
{
if (FAILED(hr))
{
throw Platform::Exception::CreateException(hr);
}
}
}
Here's the render function which I call on every mesh:
(All of the mesh-specific data comes from the imported mesh.)
void Mesh::Render(ModelViewProjectionConstantBuffer constantBufferData)
{
if (!_loadingComplete)
{
return;
}
XMStoreFloat4x4(&constantBufferData.model, XMLoadFloat4x4(&_modelMatrix));
// Prepare the constant buffer to send it to the Graphics device.
_d3dContext->UpdateSubresource(
_constantBuffer.Get(),
0,
NULL,
&constantBufferData,
0,
0
);
UINT offset = 0;
_d3dContext->IASetVertexBuffers(
0,
1,
_vertexBuffer.GetAddressOf(),
&_stride,
&_offset
);
_d3dContext->IASetIndexBuffer(
_indexBuffer.Get(),
DXGI_FORMAT_R16_UINT, // Each index is one 16-bit unsigned integer (short).
0
);
_d3dContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
_d3dContext->IASetInputLayout(_inputLayout.Get());
// Attach our vertex shader.
_d3dContext->VSSetShader(
_vertexShader.Get(),
nullptr,
0
);
// Send the constant buffer to the Graphics device.
_d3dContext->VSSetConstantBuffers(
0,
1,
_constantBuffer.GetAddressOf()
);
// Attach our pixel shader.
_d3dContext->PSSetShader(
_pixelShader.Get(),
nullptr,
0
);
SetTexture();
// Draw the objects.
_d3dContext->DrawIndexed(
_indexCount,
0,
0
);
}
And this is the vertex shader:
cbuffer ModelViewProjectionConstantBuffer : register(b0)
{
matrix model;
matrix view;
matrix projection;
};
struct VertexShaderInput
{
float3 pos : POSITION;
//float3 normal : NORMAL0; //uncommenting these changes the color data for some reason (but always wrong)
//float2 uv1 : TEXCOORD0;
//float2 uv2 : TEXCOORD1;
float3 color : COLOR0;
};
struct VertexShaderOutput
{
float3 color : COLOR0;
float4 pos : SV_POSITION;
};
VertexShaderOutput main(VertexShaderInput input)
{
VertexShaderOutput output;
float4 pos = float4(input.pos, 1.0f);
// Transform the vertex position into projected space.
pos = mul(pos, model);
pos = mul(pos, view);
pos = mul(pos, projection);
output.pos = pos;
output.color = input.color;
return output;
}
And this is the pixel shader:
struct PixelShaderInput
{
float3 color: COLOR0;
};
float4 main(PixelShaderInput input) : SV_TARGET
{
return float4(input.color.r, input.color.g, input.color.b, 1);
}
The most likely issue is that you are not setting enough state for your drawing, and that the DirectX Tool Kit drawing functions are setting states that don't match what your existing code requires.
For performance reasons, DirectX Tool Kit does not 'save & restore' state. Instead each draw function sets the state it needs fully and then leaves it. I document which state is impacted in the wiki under the State management section for each class.
Your code above sets the vertex buffer, index buffer, input layout, vertex shader, pixel shader, primitive topology, and VS constant buffer in slot 0.
You did not set blend state, depth/stencil state, or the rasterizer state. You didn't provide the pixel shader so I don't know if you need any PS constant buffers, samplers, or shader resources.
Try explicitly setting the blend state, depth/stencil state, and rasterizer state before you draw your procedural meshes. If you just want to go back to the defined defaults instead of whatever DirectX Tool Kit did, call:
_d3dContext->RSSetState(nullptr);
_d3dContext->OMSetBlendState(nullptr, nullptr, 0);
_d3dContext->OMSetDepthStencilState(nullptr, 0xffffffff);
See also the CommonStates class.
It's generally not a good idea to use identifiers that start with _ in C++. Officially, all identifiers that start with _X (where X is a capital letter) or __ are reserved for the compiler and library implementers, so they could conflict with implementation internals. m_ or something similar is better.

OpenGL ES gl_FragColor depends on an if condition in the fragment shader on iOS

I'm writing an iOS app that allows free-style drawing (using a finger) and drawing images on screen. I use OpenGL ES to implement it. I have 2 functions: one draws free-style, the other draws a texture.
--- Code drawing free style
- (void)drawFreeStyle:(NSMutableArray *)pointArray {
//Prepare vertex data
.....
// Load data to the Vertex Buffer Object
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, vertexCount*2*sizeof(GLfloat), vertexBuffer, GL_DYNAMIC_DRAW);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, GL_FALSE, 0, 0);
GLuint a_ver_flag_drawing_type = glGetAttribLocation(program[PROGRAM_POINT].id, "a_drawingType");
glVertexAttrib1f(a_ver_flag_drawing_type, 0.0f);
GLuint u_fra_flag_drawing_type = glGetUniformLocation(program[PROGRAM_POINT].id, "v_drawing_type");
glUniform1f(u_fra_flag_drawing_type, 0.0);
glUseProgram(program[PROGRAM_POINT].id);
glDrawArrays(GL_POINTS, 0, (int)vertexCount);
// Display the buffer
glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER];
}
--- Code drawing texture
- (void)drawTexture:(UIImage *)image atRect:(CGRect)rect {
GLuint a_ver_flag_drawing_type = glGetAttribLocation(program[PROGRAM_POINT].id, "a_drawingType");
GLuint u_fra_flag_drawing_type = glGetUniformLocation(program[PROGRAM_POINT].id, "v_drawing_type");
GLuint a_position_location = glGetAttribLocation(program[PROGRAM_POINT].id, "a_Position");
GLuint a_texture_coordinates_location = glGetAttribLocation(program[PROGRAM_POINT].id, "a_TextureCoordinates");
GLuint u_texture_unit_location = glGetUniformLocation(program[PROGRAM_POINT].id, "u_TextureUnit");
glUseProgram(PROGRAM_POINT);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texName);
glUniform1i(u_texture_unit_location, 0);
glUniform1f(u_fra_flag_drawing_type, 1.0);
const float textrect[] = {-1.0f, -1.0f, 0.0f, 0.0f,
-1.0f, 1.0f, 0.0f, 1.0f,
1.0f, -1.0f, 1.0f, 0.0f,
1.0f, 1.0f, 1.0f, 1.0f};
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, sizeof(textrect), textrect, GL_STATIC_DRAW);
glVertexAttrib1f(a_ver_flag_drawing_type, 1.0f);
glVertexAttribPointer(a_position_location, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)(0));
glVertexAttribPointer(a_texture_coordinates_location, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)(2 * sizeof(float)));
glEnableVertexAttribArray(a_ver_flag_drawing_type);
glEnableVertexAttribArray(a_position_location);
glEnableVertexAttribArray(a_texture_coordinates_location);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER];
}
Notice the 2 variables a_ver_flag_drawing_type (attribute) and u_fra_flag_drawing_type (uniform). They're used for setting flags in the vertex shader and fragment shader to determine whether to draw free-style or a texture.
--- Vertex shader
//Flag
attribute lowp float a_drawingType;
//For drawing
attribute vec4 inVertex;
uniform mat4 MVP;
uniform float pointSize;
uniform lowp vec4 vertexColor;
varying lowp vec4 color;
//For texture
attribute vec4 a_Position;
attribute vec2 a_TextureCoordinates;
varying vec2 v_TextureCoordinates;
void main()
{
if (abs(a_drawingType - 1.0) < 0.0001) {
//Draw texture
v_TextureCoordinates = a_TextureCoordinates;
gl_Position = a_Position;
} else {
//Draw free style
gl_Position = MVP * inVertex;
gl_PointSize = pointSize;
color = vertexColor;
}
}
--- Fragment shader
precision mediump float;
uniform sampler2D texture;
varying lowp vec4 color;
uniform sampler2D u_TextureUnit;
varying vec2 v_TextureCoordinates;
uniform lowp float v_drawing_type;
void main()
{
if (abs(v_drawing_type - 1.0) < 0.0001) {
//Draw texture
gl_FragColor = texture2D(u_TextureUnit, v_TextureCoordinates);
} else {
//Drawing free style
gl_FragColor = color * texture2D(texture, gl_PointCoord);
}
}
My idea is to set these flags from the drawing code at draw time. The attribute a_drawingType is used for the vertex shader and the uniform v_drawing_type is used for the fragment shader. The shaders depend on these flags to know whether to draw free-style or a texture.
But if I run each mode independently, just one type at a time (when drawing free-style, I comment out the texture-drawing code in the vertex and fragment shader files, and vice versa), it draws as I want. If I combine them, it not only fails to draw but also crashes the app at
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
I'm new to OpenGL ES and the GLSL language, so I'm not sure whether my idea of setting flags like that is right or wrong. Can anyone help me?
So why don't you just build 2 separate shader programs and call useProgram() on one of them, instead of sending flag values to GL and making an expensive conditional branch in the vertex and especially the fragment shader?
Attributes are per-vertex, uniforms are per-shader program.
You might see a crash if you supplied only one value for an attribute then asked OpenGL to draw, say, 100 points. In that case OpenGL is going to do an out-of-bounds array access when it attempts to fetch the attributes for vertices 2–100.
It'd be more normal to use two separate programs. Conditionals are very expensive on GPUs because GPUs try to maintain one program counter while processing multiple fragments. They're SIMD units. Any time the evaluation of an if differs between two neighbouring fragments you're probably reducing parallelism. The idiom is therefore not to use if statements where possible.
If you switch to a uniform there's a good chance you won't lose any performance through absent parallelism because the paths will never diverge. Your GLSL compiler may even be smart enough to recompile the shader every time you reset the constant, effectively performing constant folding. But you'll be paying for the recompilation every time.
If you just had two programs and switched between them you wouldn't pay the recompilation fee.
It's not completely relevant in this case but e.g. you'd also often see code like:
if (abs(v_drawing_type - 1.0) < 0.0001) {
//Draw texture
gl_FragColor = texture2D(u_TextureUnit, v_TextureCoordinates);
} else {
//Drawing free style
gl_FragColor = color * texture2D(texture, gl_PointCoord);
}
Written more like:
gl_FragColor = mix(color * texture2D(texture, gl_PointCoord),
texture2D(u_TextureUnit, v_TextureCoordinates),
v_drawing_type);
... because that avoids the conditional entirely (note the argument order: mix returns its second argument when v_drawing_type is 1.0, matching the if above). In this case you'd want to adjust the texture2D calls so that both were identical, and probably factor them out of the mix call, to ensure you don't end up always doing two samples instead of one.
The previously posted answers explain certain pieces, but an important part is missing in all of them. It is perfectly legal to specify a single attribute value that is applied to all vertices in a draw call. What you did here was basically valid:
glVertexAttrib1f(a_ver_flag_drawing_type, 1.0f);
The direct problem was this call that followed shortly after:
glEnableVertexAttribArray(a_ver_flag_drawing_type);
There are two main ways to specify the value of a vertex attribute:
Use the current value, as specified in your case by glVertexAttrib1f().
Use values from an array, as specified with glVertexAttribPointer().
You select which of the two options is used for any given attribute by enabling/disabling the array, which is done by calling glEnableVertexAttribArray()/glDisableVertexAttribArray().
In the posted code, the vertex attribute was specified as only a current value, but the attribute was then enabled to fetch from an array with glEnableVertexAttribArray(). This conflict caused the crash, because the attribute values would have been fetched from an array that was never specified. To use the specified current value, the call simply has to be changed to:
glDisableVertexAttribArray(a_ver_flag_drawing_type);
Or, if the array was never enabled, the call could be left out completely. But just in case another part of the code might have enabled it, it's safer to disable it explicitly.
As a side note, the following statement sequence from the first draw function also looks suspicious. glUniform*() sets a value on the currently active program, so this will set a value on the previously active program, not the one specified in the second statement. If you want to set the value on the new program, the order of the statements has to be reversed.
glUniform1f(u_fra_flag_drawing_type, 0.0);
glUseProgram(program[PROGRAM_POINT].id);
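That is, the corrected order is:
glUseProgram(program[PROGRAM_POINT].id);
glUniform1f(u_fra_flag_drawing_type, 0.0);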
On the whole thing, I think there are at least two approaches that are better than the one chosen:
Use separate shader programs for the two different types of rendering. While using a single program with switchable behavior is a valid option, it looks artificial, and using separate programs seems much cleaner.
If you want to stick with a single program, use a single uniform to do the switching, instead of using an attribute and a uniform. You could use the one you already have, but you might just as well make it a boolean while you're at it. So in both the vertex and fragment shader, use the same uniform declaration:
uniform bool v_use_texture;
Then the tests become:
if (v_use_texture) {
Getting the uniform location is the same as before, and you can set the value, which will then be available in both the vertex and fragment shader, with one of:
glUniform1i(loc, 0);
glUniform1i(loc, 1);
I found the problem: just change the variable a_drawingType from an attribute to a uniform, then use glGetUniformLocation and glUniform1f to get the location and pass the value. I think an attribute is passed per vertex, so use a uniform to pass the value once.

HLSL 3: Can a Pixel Shader be declared alone?

I've been asked to split the question below into multiple questions:
HLSL and Pix number of questions
This is asking the first question: in HLSL 3, can I run a pixel shader without a vertex shader? In HLSL 2 I notice you can, but I can't seem to find a way in 3.
The shader compiles fine; however, I then get this error from Visual Studio when calling SpriteBatch.Draw():
"Cannot mix shader model 3.0 with earlier shader models. If either the vertex shader or pixel shader is compiled as 3.0, they must both be."
I don't believe I've defined anything in the shader to use anything earlier than 3, so I'm left a bit confused. Any help would be appreciated.
The problem is that the built-in SpriteBatch shader is 2.0. If you specify a pixel shader only, SpriteBatch still uses its built-in vertex shader. Hence the version mismatch.
The solution, then, is to also specify a vertex shader yourself. Fortunately Microsoft provides the source to XNA's built-in shaders. All it involves is a matrix transformation. Here's the code, modified so you can use it directly:
float4x4 MatrixTransform;
void SpriteVertexShader(inout float4 color : COLOR0,
inout float2 texCoord : TEXCOORD0,
inout float4 position : SV_Position)
{
position = mul(position, MatrixTransform);
}
And then - because SpriteBatch won't set it for you - setting your effect's MatrixTransform correctly. It's a simple projection of "client" space (source from this blog post). Here's the code:
Matrix projection = Matrix.CreateOrthographicOffCenter(0,
GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height, 0, 0, 1);
Matrix halfPixelOffset = Matrix.CreateTranslation(-0.5f, -0.5f, 0);
effect.Parameters["MatrixTransform"].SetValue(halfPixelOffset * projection);
You can try the simple examples here. The greyscale shader is a very good example to understand how a minimal pixel shader works.
Basically, you create an Effect in your content project like this one:
sampler s0;
float4 PixelShaderFunction(float2 coords: TEXCOORD0) : COLOR0
{
// B/N
//float4 color = tex2D(s0, coords);
//color.gb = color.r;
// Transparent
float4 color = tex2D(s0, coords);
return color;
}
technique Technique1
{
pass Pass1
{
PixelShader = compile ps_2_0 PixelShaderFunction();
}
}
You also need to:
Create an Effect object and load its content.
ambienceEffect = Content.Load<Effect>("Effects/Ambient");
Call your SpriteBatch.Begin() method passing the Effect object you want to use
spriteBatch.Begin( SpriteSortMode.FrontToBack,
BlendState.AlphaBlend,
null,
null,
null,
ambienceEffect,
camera2d.GetTransformation());
Inside the SpriteBatch.Begin() - SpriteBatch.End() block, you must apply the technique inside the Effect:
ambienceEffect.CurrentTechnique.Passes[0].Apply();

Is it important to call glDisableVertexAttribArray()?

I'm not entirely clear on the scope of enabling vertex attrib arrays. I've got several different shader programs with differing numbers of vertex attributes. Are glEnableVertexAttribArray calls local to a shader program, or global?
Right now I'm enabling vertex attrib arrays when I create the shader program, and never disabling them, and all seems to work, but it seems like I'm possibly supposed to enable/disable them right before/after draw calls. Is there an impact to this?
(I'm in WebGL, as it happens, so we're really talking about gl.enableVertexAttribArray and gl.disableVertexAttribArray. I'll note also that the orange book, OpenGL Shading Language, is quite uninformative about these calls.)
The state of which Vertex Attribute Arrays are enabled can be either bound to a Vertex Array Object (VAO), or be global.
If you're using VAOs, then you should not disable attribute arrays, as they are encapsulated in the VAO.
However, for global vertex attribute array enabled state you should disable them, because if they're left enabled, OpenGL will try to read from arrays that may be bound to an invalid pointer, which may either crash your program if the pointer is to client address space, or raise an OpenGL error if it points outside the limits of a bound Vertex Buffer Object.
WebGL is not the same as OpenGL.
In WebGL leaving arrays enabled is explicitly allowed as long as there is a buffer attached to the attribute and (a) if it's used it's large enough to satisfy the draw call or (b) it's not used.
Unlike OpenGL ES 2.0, WebGL doesn't allow client side arrays.
Proof:
const gl = document.querySelector("canvas").getContext("webgl");
const vsUses2Attributes = `
attribute vec4 position;
attribute vec4 color;
varying vec4 v_color;
void main() {
gl_Position = position;
gl_PointSize = 20.0;
v_color = color;
}
`;
const vsUses1Attribute = `
attribute vec4 position;
varying vec4 v_color;
void main() {
gl_Position = position;
gl_PointSize = 20.0;
v_color = vec4(0,1,1,1);
}
`
const fs = `
precision mediump float;
varying vec4 v_color;
void main() {
gl_FragColor = v_color;
}
`;
const program2Attribs = twgl.createProgram(gl, [vsUses2Attributes, fs]);
const program1Attrib = twgl.createProgram(gl, [vsUses1Attribute, fs]);
function createBuffer(data) {
const buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(data), gl.STATIC_DRAW);
return buf;
}
const buffer3Points = createBuffer([
-0.7, 0.5,
0.0, 0.5,
0.7, 0.5,
]);
const buffer3Colors = createBuffer([
1, 0, 0, 1,
0, 1, 0, 1,
0, 0, 1, 1,
]);
const buffer9Points = createBuffer([
-0.8, -0.5,
-0.6, -0.5,
-0.4, -0.5,
-0.2, -0.5,
0.0, -0.5,
0.2, -0.5,
0.4, -0.5,
0.6, -0.5,
0.8, -0.5,
]);
// set up 2 attributes
{
const posLoc = gl.getAttribLocation(program2Attribs, 'position');
gl.enableVertexAttribArray(posLoc);
gl.bindBuffer(gl.ARRAY_BUFFER, buffer3Points);
gl.vertexAttribPointer(posLoc, 2, gl.FLOAT, false, 0, 0);
const colorLoc = gl.getAttribLocation(program2Attribs, 'color');
gl.enableVertexAttribArray(colorLoc);
gl.bindBuffer(gl.ARRAY_BUFFER, buffer3Colors);
gl.vertexAttribPointer(colorLoc, 4, gl.FLOAT, false, 0, 0);
}
// draw
gl.useProgram(program2Attribs);
gl.drawArrays(gl.POINTS, 0, 3);
// set up 1 attribute (don't disable the second attribute)
{
const posLoc = gl.getAttribLocation(program1Attrib, 'position');
gl.enableVertexAttribArray(posLoc);
gl.bindBuffer(gl.ARRAY_BUFFER, buffer9Points);
gl.vertexAttribPointer(posLoc, 2, gl.FLOAT, false, 0, 0);
}
// draw
gl.useProgram(program1Attrib);
gl.drawArrays(gl.POINTS, 0, 9);
const err = gl.getError();
console.log(err ? `ERROR: ${twgl.glEnumToString(gl, err)}` : 'no WebGL errors');
canvas { border: 1px solid black; }
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<p>
1st it draws 3 points (3 vertices, 2 attributes)<br/>
2nd it draws 9 points (9 vertices, 1 attribute)<br/>
It does NOT call gl.disableVertexAttribArray, so on the second draw call one of the attributes is still enabled. It is pointing to a buffer with only 3 vertices in it even though 9 vertices will be drawn. There are no errors.
</p>
<canvas></canvas>
Another example: just enable all the attributes, then draw with a shader that uses no attributes (no error) and also draw with a shader that uses 1 attribute (again no error). There is no need to call gl.disableVertexAttribArray:
const gl = document.querySelector("canvas").getContext("webgl");
const vsUses1Attributes = `
attribute vec4 position;
void main() {
gl_Position = position;
gl_PointSize = 20.0;
}
`;
const vsUses0Attributes = `
void main() {
gl_Position = vec4(0, 0, 0, 1);
gl_PointSize = 20.0;
}
`
const fs = `
precision mediump float;
void main() {
gl_FragColor = vec4(1, 0, 0, 1);
}
`;
const program0Attribs = twgl.createProgram(gl, [vsUses0Attributes, fs]);
const program1Attrib = twgl.createProgram(gl, [vsUses1Attributes, fs]);
function createBuffer(data) {
const buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(data), gl.STATIC_DRAW);
return buf;
}
const buffer3Points = createBuffer([
-0.7, 0.5,
0.0, 0.5,
0.7, 0.5,
]);
const buffer0Points = createBuffer([]);
// enable all the attributes and bind a buffer to them
const maxAttrib = gl.getParameter(gl.MAX_VERTEX_ATTRIBS);
for (let i = 0; i < maxAttrib; ++i) {
gl.enableVertexAttribArray(i);
gl.vertexAttribPointer(i, 2, gl.FLOAT, false, 0, 0);
}
gl.useProgram(program0Attribs);
gl.drawArrays(gl.POINTS, 0, 1);
gl.useProgram(program1Attrib);
const posLoc = gl.getAttribLocation(program1Attrib, 'position');
gl.bindBuffer(gl.ARRAY_BUFFER, buffer3Points);
gl.vertexAttribPointer(posLoc, 2, gl.FLOAT, false, 0, 0);
gl.drawArrays(gl.POINTS, 0, 3);
const err = gl.getError();
console.log(err ? `ERROR: ${twgl.glEnumToString(gl, err)}` : 'no WebGL errors');
canvas { border: 1px solid black; }
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<p>
1st it enables all attributes<br/>
2nd it draws 1 point that needs no attributes (no error)<br/>
3rd it draws 3 points that need 1 attribute (no error)<br/>
It does NOT call gl.disableVertexAttribArray on any of the attributes so they are all still enabled. There are no errors.
</p>
<canvas></canvas>
For WebGL I'm going to go with yes, it is important to call gl.disableVertexAttribArray.
Chrome was giving me this warning:
WebGL: INVALID_OPERATION: drawElements: attribs not setup correctly
This was happening when the program changed to one using less than the maximum number of attributes. Obviously the solution was to disable the unused attributes before drawing.
If all your programs use the same number of attributes, you may well get away with calling gl.enableVertexAttribArray once on initialization. Otherwise you'll need to manage them when you change programs.
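For example, here is a minimal sketch of managing them when switching programs. It assumes attribute locations were packed from 0 (e.g. via gl.bindAttribLocation) so the program's active attributes occupy locations 0..n-1, and the helper name is made up:
let enabledAttribCount = 0; // how many attribute arrays are currently enabled

function useProgramAndAttribs(gl, program) {
  gl.useProgram(program);
  const needed = gl.getProgramParameter(program, gl.ACTIVE_ATTRIBUTES);
  for (let i = enabledAttribCount; i < needed; ++i) gl.enableVertexAttribArray(i);  // enable the ones this program adds
  for (let i = needed; i < enabledAttribCount; ++i) gl.disableVertexAttribArray(i); // disable the leftovers
  enabledAttribCount = needed;
}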
Think of it this way: attributes are local to a VAO, not to a shader program, and the VBOs live in GPU memory.
In WebGL there is a default VAO that WebGL uses unless the programmer creates one (the same concept applies either way). This VAO contains a target called ARRAY_BUFFER, to which any VBO in GPU memory can be bound, and an ELEMENT_ARRAY_BUFFER target, to which any index data buffer can be bound. It also contains an array of attribute slots with a fixed count; the count depends on the implementation and platform, but let's say 8 here, which is the minimum required by the WebGL specification.
Now, when you create a shader program, it will have the attributes you specify. When you link the program, WebGL assigns one of the possible attribute slot numbers to each of its attributes. The attributes then use the corresponding attribute slots in the VAO to access the data bound to the ARRAY_BUFFER or ELEMENT_ARRAY_BUFFER targets. When you call gl.enableVertexAttribArray(location) and gl.vertexAttribPointer(location, ...) you are not changing any characteristic of the attributes in the shader program (they simply hold an attribute number referring to the VAO slot they will use to access data). What you are actually doing is modifying the state of that attribute slot in the VAO. For a program's attributes to access data, the corresponding slots in the VAO must be enabled (gl.enableVertexAttribArray()), and each slot must be configured to read the data correctly from the buffer bound to ARRAY_BUFFER (gl.vertexAttribPointer()). There must also be a buffer bound to the VAO's targets (gl.bindBuffer()). Once a VBO is set for a slot it won't change; even if we unbind it from the target, the slot can still read from the VBO as long as it exists in GPU memory. So gl.enableVertexAttribArray(location) enables the attribute slot specified by location in the current VAO, and gl.disableVertexAttribArray(location) disables it. This has nothing to do with the shader program: even if you use a different shader program, the state of these attribute slots is not affected.
So, if two different shader programs use the same attribute slots, there won't be any error, because the corresponding slots in the VAO are already enabled; but the data from the targets might be read incorrectly if the two programs need to interpret it differently. If the two shader programs use different attribute slots, you might enable the second program's required slots and think your program should work; but the slots enabled by the previous program will still be enabled, even though they are unused. This causes an error.
So when changing shader programs, we must ensure that any enabled attribute slots in the VAO that won't be used by the new program are disabled. Although we might not specify any VAOs explicitly, WebGL works like this by default.
One way is to maintain a list of enabled attributes on the JavaScript side and disable all enabled attribute slots when switching programs while still using the same VAO. Another way is to create a custom VAO accessed by only one shader program, but that is less efficient. Yet another way is to bind attribute locations to fixed slots before the shader program is linked, using gl.bindAttribLocation().
