How to rotate a texture in WebGL

I am capturing a texture from a webcam in the browser. Its orientation is correct when I display it using the video element, but when it gets into WebGL the texture is rotated -90 degrees.
How can I rotate the texture 90 degrees so it goes back to its normal orientation?
function updateTexture(gl, texture, video) {
  const level = 0;
  const internalFormat = gl.RGBA;
  const srcFormat = gl.RGBA;
  const srcType = gl.UNSIGNED_BYTE;
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, level, internalFormat,
                srcFormat, srcType, video);
}
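A common way to handle this (a minimal sketch, not from the original thread) is to leave the texture data alone and rotate the texture coordinates instead, for example with a mat2 uniform in the vertex shader. The names position, texcoord, u_texRotation, and u_texRotationLoc below are illustrative:

// Sketch: rotate the texture coordinates instead of the texture data.
const vs = `
attribute vec4 position;
attribute vec2 texcoord;
uniform mat2 u_texRotation;
varying vec2 v_texcoord;
void main() {
  gl_Position = position;
  // shift the center of the texture to the origin, rotate, shift back
  v_texcoord = u_texRotation * (texcoord - 0.5) + 0.5;
}
`;

// at init time, after linking the program:
// const u_texRotationLoc = gl.getUniformLocation(program, "u_texRotation");

// a 90-degree rotation, supplied column-major as uniformMatrix2fv expects
const angle = Math.PI / 2;
const rot = [
   Math.cos(angle), Math.sin(angle),
  -Math.sin(angle), Math.cos(angle),
];
// gl.uniformMatrix2fv(u_texRotationLoc, false, rot);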

Related

MSAA in WebGL 2.0 - Perform MSAA on a Quad

I'm trying to perform MSAA on a framebuffer, and in the standalone version, where I draw a cube to the framebuffer and blit that framebuffer to the canvas, it works like a charm:
var gl = canvas.getContext("webgl2", {
  antialias: false
});
const framebuffer = gl.createFramebuffer();
const renderbuffer = gl.createRenderbuffer();
gl.bindRenderbuffer(gl.RENDERBUFFER, renderbuffer);
gl.renderbufferStorageMultisample(gl.RENDERBUFFER, gl.getParameter(gl.MAX_SAMPLES), gl.RGBA8, this.width, this.height);
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.RENDERBUFFER, renderbuffer);

// .. prepare scene ..
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
// .. draw scene ..

gl.bindFramebuffer(gl.READ_FRAMEBUFFER, framebuffer);
gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, null);
gl.clearBufferfv(gl.COLOR, 0, [1.0, 1.0, 1.0, 1.0]);
gl.blitFramebuffer(0, 0, canvas.width, canvas.height,
                   0, 0, canvas.width, canvas.height,
                   gl.COLOR_BUFFER_BIT, gl.LINEAR);
But when I do this in my engine, which has a deferred pipeline, the blit is performed but the multisampling (MSAA) is not. The only difference I can think of is that there I am writing an image drawn to a quad to the framebuffer, whereas in the working example it is a cube.
As requested, in the case where it is not working the setup is like this:
var gl = canvas.getContext("webgl2", {
  antialias: false
});

// .. Load resources ..
// .. Prepare renderpasses ..
//      shadow_depth for every light
//      deferred scene
//      ssao
//      shadow for first light
//      convolution on ssao and shadow
//      convolution
//      uber for every light
//      tonemap
//      msaa
//      ..
// .. draw renderpasses ..
//      deferred scene
//      ssao
//      shadow for first light
//      convolution on ssao and shadow
//      convolution
//      uber for every light
//      tonemap
//      ...
const framebuffer = gl.createFramebuffer();
const renderbuffer = gl.createRenderbuffer();
gl.bindRenderbuffer(gl.RENDERBUFFER, renderbuffer);
gl.renderbufferStorageMultisample(gl.RENDERBUFFER, gl.getParameter(gl.MAX_SAMPLES), gl.RGBA8, this.width, this.height);
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.RENDERBUFFER, renderbuffer);

gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
// .. draw tonemap of scene to quad ..

gl.bindFramebuffer(gl.READ_FRAMEBUFFER, framebuffer);
gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, null);
gl.clearBufferfv(gl.COLOR, 0, [1.0, 1.0, 1.0, 1.0]);
gl.blitFramebuffer(0, 0, canvas.width, canvas.height,
                   0, 0, canvas.width, canvas.height,
                   gl.COLOR_BUFFER_BIT, gl.LINEAR);
renderbufferStorageMultisample needs to be applied only to the framebuffer object that holds the initial 3D content. When doing post-processing, multisampling has no effect, because only 1 or 2 triangles are being rasterized and they span the entire viewport: MSAA only smooths the edges of rasterized primitives, and a full-screen quad's edges lie at the viewport border, so the image content itself is never antialiased.
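To make the ordering concrete, here is a minimal sketch of that idea (my own illustration, not code from the original answer; drawScene and runPostProcessing are placeholders for the engine's passes): render the geometry into the multisampled framebuffer, resolve it with a blit, and only then run the full-screen post-processing passes.

const canvas = document.createElement("canvas");
const gl = canvas.getContext("webgl2", { antialias: false });
const w = canvas.width, h = canvas.height;

// multisampled framebuffer: receives the actual 3D geometry
const msaaFbo = gl.createFramebuffer();
const rb = gl.createRenderbuffer();
gl.bindRenderbuffer(gl.RENDERBUFFER, rb);
gl.renderbufferStorageMultisample(gl.RENDERBUFFER, gl.getParameter(gl.MAX_SAMPLES), gl.RGBA8, w, h);
gl.bindFramebuffer(gl.FRAMEBUFFER, msaaFbo);
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.RENDERBUFFER, rb);

// plain texture-backed framebuffer: receives the resolved result
const sceneTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, sceneTexture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA8, w, h, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
const resolveFbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, resolveFbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, sceneTexture, 0);

// 1. draw the geometry into the multisampled framebuffer
gl.bindFramebuffer(gl.FRAMEBUFFER, msaaFbo);
// drawScene();

// 2. resolve the samples into sceneTexture
gl.bindFramebuffer(gl.READ_FRAMEBUFFER, msaaFbo);
gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, resolveFbo);
gl.blitFramebuffer(0, 0, w, h, 0, 0, w, h, gl.COLOR_BUFFER_BIT, gl.NEAREST);

// 3. post-process full-screen quads sampling the already-antialiased
//    sceneTexture; MSAA gains nothing in these passes.
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
// runPostProcessing(sceneTexture);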

In WebGL, can a fragment shader be used to set a LUMINANCE texture?

I'm using WebGL for some image processing, and I'd like my fragment shader to output to a 1 or 2 channel texture. I can attach an RGBA or RGB texture to the framebuffer and output to those successfully. But if I attach a LUMINANCE or LUMINANCE_ALPHA texture to the framebuffer instead, the framebuffer status shows as incomplete and it does not work. I'm hoping to avoid the unneeded extra texture channels, but I'm not sure if this is possible. Thanks for any suggestions!
If format is changed to gl.RGBA below then it works:
gl.getExtension("OES_texture_float");
var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
var format = gl.LUMINANCE;
gl.texImage2D(gl.TEXTURE_2D, 0, format, 512, 512, 0, format, gl.FLOAT, null);
var fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);
if (gl.checkFramebufferStatus(gl.FRAMEBUFFER) != gl.FRAMEBUFFER_COMPLETE) {
  alert("framebuffer not complete");
}
In WebGL1, only 3 combinations of framebuffer attachments are guaranteed to work.
From the spec, section 6.8:
The following combinations of framebuffer object attachments, when all of the attachments are framebuffer attachment complete, non-zero, and have the same width and height, must result in the framebuffer being framebuffer complete:
COLOR_ATTACHMENT0 = RGBA/UNSIGNED_BYTE texture
COLOR_ATTACHMENT0 = RGBA/UNSIGNED_BYTE texture + DEPTH_ATTACHMENT = DEPTH_COMPONENT16 renderbuffer
COLOR_ATTACHMENT0 = RGBA/UNSIGNED_BYTE texture + DEPTH_STENCIL_ATTACHMENT = DEPTH_STENCIL renderbuffer
All other combinations are implementation-dependent.
(note: OpenGL ES 2.0, on which WebGL1 is based, does not guarantee any combinations to work, period 🙄)
In WebGL2, a bunch of format/type combinations are guaranteed to work, but LUMINANCE/FLOAT is not one of them.
You can check out https://geotiffjs.github.io/cog-explorer for some examples of how to work with FLOAT and LUMINANCE data. Load the sample "Landsat 8 sample 1" or "Landsat 8 sample 2" and debug through webglrenderer.js.
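If WebGL2 is an option, a single-channel render target avoids the extra channels. Here is a minimal sketch (my own addition, not from the original answer): R8 is a required color-renderable format in core WebGL2, and the EXT_color_buffer_float extension, where available, makes R32F renderable for float output.

const gl = document.createElement("canvas").getContext("webgl2");

// R8: one byte per pixel, color-renderable in core WebGL2
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.R8, 512, 512, 0, gl.RED, gl.UNSIGNED_BYTE, null);

// rendering to float formats such as R32F needs this extension
const canRenderFloat = gl.getExtension("EXT_color_buffer_float") !== null;
const floatTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, floatTex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.R32F, 512, 512, 0, gl.RED, gl.FLOAT, null);

const fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D,
                        canRenderFloat ? floatTex : tex, 0);
if (gl.checkFramebufferStatus(gl.FRAMEBUFFER) != gl.FRAMEBUFFER_COMPLETE) {
  alert("framebuffer not complete");
}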

Hello world example of WebGL parallelism

It seems there are many abstractions around WebGL for running parallel processing, e.g.:
https://github.com/MaiaVictor/WebMonkeys
https://github.com/gpujs/gpu.js
https://github.com/turbo/js
But I am having a hard time understanding what a simple and complete example of parallelism would look like in plain GLSL code for WebGL. I don't have much experience with WebGL but I understand that there are fragment and vertex shaders and how to load them into a WebGL context from JavaScript. I don't know how to use the shaders or which one is supposed to do the parallel processing.
I am wondering if one could demonstrate a simple hello world example of a parallel add operation, essentially this, but in parallel form, using GLSL / WebGL shaders / however it should be done:
var array = []
var size = 10000
while (size--) array.push(0)

for (var i = 0, n = 10000; i < n; i++) {
  array[i] += 10
}
I guess I essentially don't understand:
If WebGL runs everything in parallel automatically.
Or if there is a max number of things run in parallel, so if you have 10,000 things, but only 1000 in parallel, then it would do 1,000 in parallel 10 times sequentially.
Or if you have to manually specify the amount of parallelism you want.
If the parallelism goes into the fragment shader or vertex shader, or both.
How to actually implement the parallel example.
First off, WebGL only rasterizes points, lines, and triangles. Using WebGL for non-rasterization work (GPGPU) is basically a matter of realizing that the inputs to WebGL are data from arrays, and that the output, a 2D rectangle of pixels, is also really just a 2D array, so by creatively providing non-graphic data and creatively rasterizing that data you can do non-graphics math.
WebGL is parallel in 2 ways:
1. It's running on a different processor, the GPU. While the GPU is computing something, your CPU is free to do something else.
2. GPUs themselves compute in parallel. A good example: if you rasterize a triangle that covers 100 pixels, the GPU can process each of those pixels in parallel, up to the limit of that GPU. Without digging too deeply, it looks like an NVidia 1080 GPU has 2560 cores, so assuming they are not specialized, and assuming the best case, it could compute 2560 things in parallel.
As for an example, all WebGL apps use parallel processing by points (1) and (2) above without doing anything special.
Adding 10 to 10000 elements in place, though, is not what WebGL is good at, because WebGL can't read from and write to the same data during one operation. In other words, your example would need to be:
const size = 10000;
const srcArray = [];
const dstArray = [];
for (let i = 0; i < size; ++i) {
  srcArray[i] = 0;
}
for (let i = 0; i < size; ++i) {
  dstArray[i] = srcArray[i] + 10;
}
Just like in any programming language, there is more than one way to accomplish this. The fastest would probably be to copy all your values into a texture, then rasterize into another texture, looking up values from the first texture and writing value + 10 to the destination. But therein lies one of the issues: transferring data to and from the GPU is slow, so you need to weigh that when deciding whether doing the work on the GPU is a win.
Another limit: just as you can't read from and write to the same array, you also can't randomly access the destination array. The GPU rasterizes lines, points, or triangles. It's fastest at drawing triangles, but that means it decides which pixels to write to, and in what order, so your problem also has to live within those limits. You can use points as a way to randomly choose a destination, but rendering points is much slower than rendering triangles.
Note that "compute shaders" (not yet part of WebGL) add random access write ability to GPUs.
Example:
const gl = document.createElement("canvas").getContext("webgl");
const vs = `
attribute vec4 position;
attribute vec2 texcoord;
varying vec2 v_texcoord;
void main() {
  gl_Position = position;
  v_texcoord = texcoord;
}
`;
const fs = `
precision highp float;
uniform sampler2D u_srcData;
uniform float u_add;
varying vec2 v_texcoord;
void main() {
  vec4 value = texture2D(u_srcData, v_texcoord);
  // We can't choose the destination here.
  // It has already been decided by however
  // we asked WebGL to rasterize.
  gl_FragColor = value + u_add;
}
`;
// calls gl.createShader, gl.shaderSource,
// gl.compileShader, gl.createProgram,
// gl.attachShader, gl.linkProgram,
// gl.getAttribLocation, gl.getUniformLocation
const programInfo = twgl.createProgramInfo(gl, [vs, fs]);

const size = 10000;
// Uint8Array values default to 0
const srcData = new Uint8Array(size);
// let's use slightly more interesting numbers
for (let i = 0; i < size; ++i) {
  srcData[i] = i % 200;
}
// Put that data in a texture. NOTE: Textures
// are (generally) 2 dimensional and have a limit
// on their dimensions. That means you can't make
// a 1000000 by 1 texture. Most GPUs limit dimensions
// to somewhere between 2048 and 16384.
// In our case we're doing 10000 so we could use
// a 100x100 texture. Except that WebGL can
// process 4 values at a time (red, green, blue, alpha)
// so a 50x50 will give us 10000 values
const srcTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, srcTex);
const level = 0;
const width = Math.sqrt(size / 4);
if (width % 1 !== 0) {
  // we need some other technique to fit
  // our data into a texture.
  alert('size does not have integer square root');
}
const height = width;
const border = 0;
const internalFormat = gl.RGBA;
const format = gl.RGBA;
const type = gl.UNSIGNED_BYTE;
gl.texImage2D(
    gl.TEXTURE_2D, level, internalFormat,
    width, height, border, format, type, srcData);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
// create a destination texture
const dstTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, dstTex);
gl.texImage2D(
    gl.TEXTURE_2D, level, internalFormat,
    width, height, border, format, type, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
// make a framebuffer so we can render to the
// destination texture
const fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
// and attach the destination texture
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, dstTex, level);
// calls gl.createBuffer, gl.bindBuffer, gl.bufferData
// to put a 2 unit quad (2 triangles) into
// a buffer with matching texture coords
// to process the entire quad
const bufferInfo = twgl.createBufferInfoFromArrays(gl, {
  position: {
    data: [
      -1, -1,
       1, -1,
      -1,  1,
      -1,  1,
       1, -1,
       1,  1,
    ],
    numComponents: 2,
  },
  texcoord: [
    0, 0,
    1, 0,
    0, 1,
    0, 1,
    1, 0,
    1, 1,
  ],
});
gl.useProgram(programInfo.program);
// calls gl.bindBuffer, gl.enableVertexAttribArray, gl.vertexAttribPointer
twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo);
// calls gl.activeTexture, gl.bindTexture, gl.uniformXXX
twgl.setUniforms(programInfo, {
  u_add: 10 / 255,  // because we're using Uint8
  u_srcData: srcTex,
});
// set the viewport to match the destination size
gl.viewport(0, 0, width, height);
// draw the quad (2 triangles)
const offset = 0;
const numVertices = 6;
gl.drawArrays(gl.TRIANGLES, offset, numVertices);
// pull out the result
const dstData = new Uint8Array(size);
gl.readPixels(0, 0, width, height, format, type, dstData);
console.log(dstData);
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
Making a generic math processor would require a ton more work.
Issues:
Textures are 2D arrays, and WebGL only rasterizes points, lines, and triangles, so for example it's much easier to process data that fits into a rectangle than data that doesn't. In other words, if you have 10001 values, there is no rectangle that fits an integer number of units. It might be best to pad your data and just ignore the part past the end; for example, a 100x101 texture would be 10100 values, so just ignore the last 99 values.
The example above uses 8-bit 4-channel textures. It would be easier to use 8-bit 1-channel textures (less math) but also less efficient, since WebGL can process 4 values per operation.
Because it uses 8-bit textures, it can only store integer values from 0 to 255. We could switch to 32-bit floating point textures. Floating point textures are an optional feature of WebGL (you need to enable extensions and check that they succeeded). Rasterizing to a floating point texture is also an optional feature. Most mobile GPUs as of 2018 do not support rendering to a floating point texture, so you have to find creative ways of encoding the results into a format they do support if you want your code to work on those GPUs.
Addressing the source data requires math to convert from a 1D index to a 2D texture coordinate. In the example above, since we are converting directly from srcData to dstData 1 to 1, no math is needed. If you needed to jump around srcData, you'd need to provide that math:
WebGL1

vec2 texcoordFromIndex(int ndx) {
  int column = int(mod(float(ndx), float(widthOfTexture)));
  int row = ndx / widthOfTexture;
  return (vec2(column, row) + 0.5) / vec2(widthOfTexture, heightOfTexture);
}

vec2 texcoord = texcoordFromIndex(someIndex);
vec4 value = texture2D(someTexture, texcoord);
WebGL2

ivec2 texcoordFromIndex(int ndx) {
  int column = ndx % widthOfTexture;
  int row = ndx / widthOfTexture;
  return ivec2(column, row);
}

int level = 0;
ivec2 texcoord = texcoordFromIndex(someIndex);
vec4 value = texelFetch(someTexture, texcoord, level);
Let's say we want to sum every 2 numbers. We might do something like this:
const gl = document.createElement("canvas").getContext("webgl2");
const vs = `#version 300 es
in vec4 position;
void main() {
  gl_Position = position;
}
`;
const fs = `#version 300 es
precision highp float;

uniform sampler2D u_srcData;
uniform ivec2 u_destSize;  // x = width, y = height

out vec4 outColor;

ivec2 texcoordFromIndex(int ndx, ivec2 size) {
  int column = ndx % size.x;
  int row = ndx / size.x;
  return ivec2(column, row);
}

void main() {
  // compute index of destination
  ivec2 dstPixel = ivec2(gl_FragCoord.xy);
  int dstNdx = dstPixel.y * u_destSize.x + dstPixel.x;

  ivec2 srcSize = textureSize(u_srcData, 0);

  int srcNdx = dstNdx * 2;
  ivec2 uv1 = texcoordFromIndex(srcNdx, srcSize);
  ivec2 uv2 = texcoordFromIndex(srcNdx + 1, srcSize);

  float value1 = texelFetch(u_srcData, uv1, 0).r;
  float value2 = texelFetch(u_srcData, uv2, 0).r;

  outColor = vec4(value1 + value2);
}
`;
// calls gl.createShader, gl.shaderSource,
// gl.compileShader, gl.createProgram,
// gl.attachShader, gl.linkProgram,
// gl.getAttribLocation, gl.getUniformLocation
const programInfo = twgl.createProgramInfo(gl, [vs, fs]);
const size = 10000;
// Uint8Array values default to 0
const srcData = new Uint8Array(size);
// let's use slightly more interesting numbers
for (let i = 0; i < size; ++i) {
  srcData[i] = i % 99;
}
const srcTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, srcTex);
const level = 0;
// R8 holds one value per pixel, so unlike the RGBA example
// above (4 values per pixel) we need size pixels: 100x100.
const srcWidth = Math.sqrt(size);
if (srcWidth % 1 !== 0) {
  // we need some other technique to fit
  // our data into a texture.
  alert('size does not have integer square root');
}
const srcHeight = srcWidth;
const border = 0;
const internalFormat = gl.R8;
const format = gl.RED;
const type = gl.UNSIGNED_BYTE;
gl.texImage2D(
    gl.TEXTURE_2D, level, internalFormat,
    srcWidth, srcHeight, border, format, type, srcData);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
// create a destination texture
const dstTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, dstTex);
const dstWidth = srcWidth;
const dstHeight = srcHeight / 2;
// should check srcHeight is evenly
// divisible by 2
gl.texImage2D(
    gl.TEXTURE_2D, level, internalFormat,
    dstWidth, dstHeight, border, format, type, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
// make a framebuffer so we can render to the
// destination texture
const fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
// and attach the destination texture
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, dstTex, level);
// calls gl.createBuffer, gl.bindBuffer, gl.bufferData
// to put a 2 unit quad (2 triangles) into
// a buffer
const bufferInfo = twgl.createBufferInfoFromArrays(gl, {
  position: {
    data: [
      -1, -1,
       1, -1,
      -1,  1,
      -1,  1,
       1, -1,
       1,  1,
    ],
    numComponents: 2,
  },
});
gl.useProgram(programInfo.program);
// calls gl.bindBuffer, gl.enableVertexAttribArray, gl.vertexAttribPointer
twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo);
// calls gl.activeTexture, gl.bindTexture, gl.uniformXXX
twgl.setUniforms(programInfo, {
  u_srcData: srcTex,
  u_srcSize: [srcWidth, srcHeight],  // not used by the shader (it uses textureSize)
  u_destSize: [dstWidth, dstHeight], // must match the "u_destSize" name in the shader
});
// set the viewport to match the destination size
gl.viewport(0, 0, dstWidth, dstHeight);
// draw the quad (2 triangles)
const offset = 0;
const numVertices = 6;
gl.drawArrays(gl.TRIANGLES, offset, numVertices);
// pull out the result
const dstData = new Uint8Array(size / 2);
gl.readPixels(0, 0, dstWidth, dstHeight, format, type, dstData);
console.log(dstData);
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
Note the example above uses WebGL2. Why? Because WebGL2 supports rendering to R8 format textures, which makes the math easy: one value per pixel instead of 4 values per pixel like the previous example. Of course it also means it's slower, but making it work with 4 values would have really complicated the math for computing indices, or might have required re-arranging the source data to better match. For example, instead of value indices going 0, 1, 2, 3, 4, 5, 6, 7, 8, ..., it would be easier to sum every 2 values if they were arranged 0, 2, 4, 6, 1, 3, 5, 7, 8, ...; that way, pulling 4 out at a time and adding the next group of 4, the values would line up. Yet another way would be to use 2 source textures: put all the even-indexed values in one texture and the odd-indexed values in the other.
WebGL1 provides both LUMINANCE and ALPHA textures, which are also one channel, but whether or not you can render to them is an optional feature, whereas in WebGL2 rendering to an R8 texture is a required feature.
WebGL2 also provides something called "transform feedback". This lets you write the output of a vertex shader to a buffer. It has the advantage that you just set the number of vertices you want to process (no need for the destination data to be a rectangle). That also means you can output floating point values (it's not optional like it is for rendering to textures). I believe (though I haven't tested) that it's slower than rendering to textures.
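For completeness, here is a minimal transform feedback sketch (my own illustration of the description above, not code from the original answer; all names are illustrative, and error checking is omitted) that performs the original "add 10 to every element" example by capturing a vertex shader output into a buffer:

const gl = document.createElement("canvas").getContext("webgl2");

const vs = `#version 300 es
in float a_value;
out float v_result;
void main() {
  v_result = a_value + 10.0;
}
`;
// a fragment shader is required to link, but rasterization is discarded below
const fs = `#version 300 es
precision highp float;
out vec4 unused;
void main() { unused = vec4(0); }
`;

const program = gl.createProgram();
for (const [type, src] of [[gl.VERTEX_SHADER, vs], [gl.FRAGMENT_SHADER, fs]]) {
  const shader = gl.createShader(type);
  gl.shaderSource(shader, src);
  gl.compileShader(shader);
  gl.attachShader(program, shader);
}
// declare which varyings to capture *before* linking
gl.transformFeedbackVaryings(program, ["v_result"], gl.SEPARATE_ATTRIBS);
gl.linkProgram(program);

const size = 10000;
const srcData = new Float32Array(size); // all zeros

// source buffer feeds the a_value attribute, one vertex per element
const srcBuf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, srcBuf);
gl.bufferData(gl.ARRAY_BUFFER, srcData, gl.STATIC_DRAW);
const loc = gl.getAttribLocation(program, "a_value");
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 1, gl.FLOAT, false, 0, 0);

// destination buffer receives the captured v_result values
const dstBuf = gl.createBuffer();
gl.bindBuffer(gl.TRANSFORM_FEEDBACK_BUFFER, dstBuf);
gl.bufferData(gl.TRANSFORM_FEEDBACK_BUFFER, size * 4, gl.STATIC_READ);
gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, dstBuf);

gl.useProgram(program);
gl.enable(gl.RASTERIZER_DISCARD); // no need to rasterize anything
gl.beginTransformFeedback(gl.POINTS);
gl.drawArrays(gl.POINTS, 0, size);
gl.endTransformFeedback();
gl.disable(gl.RASTERIZER_DISCARD);

const dstData = new Float32Array(size);
gl.getBufferSubData(gl.TRANSFORM_FEEDBACK_BUFFER, 0, dstData);
console.log(dstData); // 10, 10, 10, ...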
Since you're new to WebGL, might I suggest these tutorials.

Getting normals in WebGL for projected surface

I have vertices of some surfaces that I draw on the canvas using drawArrays(gl.TRIANGLES, ...). I need to draw these surfaces for a particular camera viewpoint, and hence all 3D points are projected into 2D, and I download the final image using toDataURL. Here is the downloaded image:
I used gl.readPixels later to retrieve the data for every pixel.
For all the edge vertices, I have the information for the normals. Just like how I got the color for every pixel in the 2D image, I want to get the normals at every pixel of the 2D image. Since I only have the normals at the edge vertices, I decided to render the normals the same way I rendered the above image and to use gl.readPixels. This is not working. Here is the relevant code:
This is the function from which drawOverlayTrianglesNormals is called. The drawOverlayTriangles function (not visible in this post) was used to produce the image shown above.
//Saving BIM
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
gl.vertexAttrib1f(shaderProgram.aIsDepth, 0.0);
drawOverlayTriangles();
saveBlob('element');

gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
gl.vertexAttrib1f(shaderProgram.aIsDepth, 0.0);
drawOverlayTrianglesNormals();
saveBlob('element');

var pixels = new Uint8Array(glCanvas.width*glCanvas.height*4);
gl.readPixels(0, 0, glCanvas.width, glCanvas.height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
pixels = new Float32Array(pixels.buffer);
}
This is the drawOverlayTrianglesNormals function:
function drawOverlayTrianglesNormals()
{
  if (overlay.numElements <= 0)
    return;

  //Creating the matrix for normal transform
  var normal_matrix = mat4.create();
  var u_Normal_Matrix = mat4.create();
  mat4.invert(normal_matrix, pMVMatrix);
  mat4.transpose(u_Normal_Matrix, normal_matrix);

  gl.enable(gl.DEPTH_TEST);

  gl.enableVertexAttribArray(shaderProgram.aVertexPosition);
  gl.enableVertexAttribArray(shaderProgram.aVertexColor);
  gl.enableVertexAttribArray(shaderProgram.aNormal);
  gl.vertexAttrib1f(shaderProgram.aIsNormal, 1.0);

  //Matrix upload
  gl.uniformMatrix4fv(shaderProgram.uMVMatrix, false, pMVMatrix);
  gl.uniformMatrix4fv(shaderProgram.uPMatrix, false, perspM);
  gl.uniformMatrix4fv(shaderProgram.uNMatrix, false, u_Normal_Matrix);

  //Create normals buffer
  normals_buffer = gl.createBuffer();

  for (var i = 0; i < overlay.numElements; i++) {
    // Upload overlay vertices
    gl.bindBuffer(gl.ARRAY_BUFFER, overlayVertices[i]);
    gl.vertexAttribPointer(shaderProgram.aVertexPosition, 3, gl.FLOAT, false, 0, 0);

    // Upload overlay colors
    gl.bindBuffer(gl.ARRAY_BUFFER, overlayTriangleColors[i]);
    gl.vertexAttribPointer(shaderProgram.aVertexColor, 4, gl.FLOAT, false, 0, 0);

    var normal_vertex = [];

    //Upload Normals
    var normals_element = overlay.elementNormals[i];
    for (var j = 0; j < overlay.elementNumVertices[i]; j++)
    {
      var x = normals_element[3*j+0];
      var y = normals_element[3*j+1];
      var z = normals_element[3*j+2];
      var length = Math.sqrt(x*x + y*y + z*z);
      normal_vertex[3*j+0] = x/length;
      normal_vertex[3*j+1] = y/length;
      normal_vertex[3*j+2] = z/length;
    }
    gl.bindBuffer(gl.ARRAY_BUFFER, normals_buffer);
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(normal_vertex), gl.STATIC_DRAW);
    gl.vertexAttribPointer(shaderProgram.aNormal, 3, gl.FLOAT, false, 0, 0);

    // Draw overlay
    gl.drawArrays(gl.TRIANGLES, 0, overlay.elementNumVertices[i]);
  }
  gl.disableVertexAttribArray(shaderProgram.aVertexPosition);
  gl.disableVertexAttribArray(shaderProgram.aVertexColor);
  gl.vertexAttrib1f(shaderProgram.aIsDepth, 0.0);
}
Below is the relevant vertex shader code:
void main(void) {
  gl_PointSize = aPointSize;
  gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
  position_1 = gl_Position;
  vColor = aVertexColor;
  vIsDepth = aIsDepth;
  vIsNormal = aIsNormal;
  vHasTexture = aHasTexture;
  normals = uNMatrix*vec4(aNormal,1.0);
  if (aHasTexture > 0.5)
    vTextureCoord = aTextureCoord;
}
Fragment shader:
  if (vIsNormal > 0.5)
  {
    gl_FragColor = vec4(normals.xyz*0.5+0.5, 1);
  }
}
Right now my output is the same image in grayscale. I am not sure what is going wrong. I felt this method makes sense, but it seems a little roundabout.
I'm not entirely sure I understand what you're trying to do, but it seems like you just want to be able to access the normals for calculating lighting effects, so let me try to answer that.
DO NOT use gl.readPixels()! It is primarily for click interactions and the like, or for modifying small numbers of pixels. Using it here would be extremely inefficient, since you would have to draw the pixels, then read them, then redraw them after calculating their appropriate lighting. The wonderful thing about WebGL is that it allows you to do all this from the beginning: the fragment shader will interpolate the information it's given to smoothly draw effects between two adjacent vertices.
Most lighting depends on comparing the surface normal to the direction of the light (as you seem to understand, judging by one of your comments). See Phong shading.
Now, you mentioned that you want the normals of all the rendered points, not just at the vertices. BUT the vertices' normals will be identical to the normals at every point on the surface, so you don't even need anything more than the vertices' normals. This is because all WebGL knows how to draw is triangles (I believe), which are flat, or planar, surfaces. And since every point on a plane has the same normal as any other point, you only really need one normal to know all of the normals!
Since it looks like all you're trying to draw are cylinders and rectangular prisms, it ought to be simple to specify the normals for the objects you create. The normals for the rectangular prisms are trivial, but so are those of the cylinder: the normals are parallel to the line going from the axis of the cylinder to the surface.
And since WebGL's fragment shader interpolates any varying variables you pass it between adjacent vertices, you can tell it to interpolate these normals smoothly across vertices, to achieve the smooth lighting seen on the Phong shading page! :D
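A minimal sketch of that last point (illustrative shaders, not the poster's code): pass the normal through as a varying and light each fragment with the interpolated, re-normalized normal. Note the normal is transformed with w = 0.0 since it is a direction; the question's shader uses vec4(aNormal, 1.0), which also mixes in the matrix's fourth column.

// Vertex shader: the rasterizer interpolates vNormal across each triangle.
const vs = `
attribute vec3 aVertexPosition;
attribute vec3 aNormal;
uniform mat4 uPMatrix;
uniform mat4 uMVMatrix;
uniform mat4 uNMatrix;   // inverse-transpose of the modelview matrix
varying vec3 vNormal;
void main() {
  gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
  // w = 0.0 transforms a direction, ignoring translation
  vNormal = (uNMatrix * vec4(aNormal, 0.0)).xyz;
}
`;

// Fragment shader: simple diffuse term from the interpolated normal.
const fs = `
precision mediump float;
uniform vec3 uLightDir;  // unit vector pointing toward the light
varying vec3 vNormal;
void main() {
  // interpolation shortens the vector between vertices, so re-normalize
  vec3 n = normalize(vNormal);
  float diffuse = max(dot(n, uLightDir), 0.0);
  gl_FragColor = vec4(vec3(diffuse), 1.0);
}
`;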

Direct3D 11 not rasterizing any vertices

I'm trying to render a simple triangle on screen using Direct3D 11, but nothing shows up. Here are my vertices:
SimpleVertex vertices[ 3 ] =
{
    { XMFLOAT3( -1.0f, -1.0f, 0.0f ) },
    { XMFLOAT3(  1.0f, -1.0f, 0.0f ) },
    { XMFLOAT3( -1.0f,  1.0f, 0.0f ) },
};
The expected output is a triangle with one point in the top left corner of the screen, one point in the top right corner of the screen, and one point in the bottom left corner of the screen. However, nothing is being rendered anywhere.
I'm not performing any matrix transformations, and the vertex shader just passes the input directly to the output. Everything seems to be set up correctly, and when I use the graphics debugger in Visual Studio 2012, the correct vertex positions are being passed to the vertex shader. However, it skips directly from the vertex shader stage to the output merger stage in the pipeline. I assume this means that nothing is being sent to the pixel shader, which would in turn mean that the vertices are being discarded in the rasterizer stage. Why is this happening?
Here is my rasterizer state:
D3D11_RASTERIZER_DESC rasterizerDesc;
rasterizerDesc.AntialiasedLineEnable = false;
rasterizerDesc.CullMode = D3D11_CULL_NONE;
rasterizerDesc.DepthBias = 0;
rasterizerDesc.DepthBiasClamp = 0.0f;
rasterizerDesc.DepthClipEnable = true;
rasterizerDesc.FillMode = D3D11_FILL_SOLID;
rasterizerDesc.FrontCounterClockwise = false;
rasterizerDesc.MultisampleEnable = false;
rasterizerDesc.ScissorEnable = false;
rasterizerDesc.SlopeScaledDepthBias = 0.0f;
And my viewport (width/height are the window client area matching my back buffer, which are set to 1024x576 in my test setup):
D3D11_VIEWPORT viewport;
viewport.Height = static_cast< float >( height );
viewport.MaxDepth = 1.0f;
viewport.MinDepth = 0.0f;
viewport.TopLeftX = 0.0f;
viewport.TopLeftY = 0.0f;
viewport.Width = static_cast< float >( width );
Can anyone see what is making the rasterizer stage drop my vertices? Or are there any other parts of my D3D setup that could be causing this?
I found this on the internet. It took absolutely ages to load, so I copied and pasted it; I have highlighted an interesting point in bold.
The D3D_OVERLOADS constructors defined in row 11 offer a convenient way for C++ programmers to create transformed and lit vertices with D3DTLVERTEX.
_D3DTLVERTEX(const D3DVECTOR& v, float _rhw, D3DCOLOR _color,
             D3DCOLOR _specular, float _tu, float _tv)
{
    sx = v.x;
    sy = v.y;
    sz = v.z;
    rhw = _rhw;
    color = _color;
    specular = _specular;
    tu = _tu;
    tv = _tv;
}
The system requires a vertex position that has already been transformed. So the x and y values must be in screen coordinates, and z must be the depth value of the pixel, which could be used in a z-buffer (we won't use a z-buffer here). Z values can range from 0.0 to 1.0, where 0.0 is the closest possible position to the viewer, and 1.0 is the farthest position still visible within the viewing area. Immediately following the position, transformed and lit vertices must include an RHW (reciprocal of homogeneous W) value.
Before rasterizing the vertices, they have to be converted from homogeneous vertices to non-homogeneous vertices, because the rasterizer expects them this way. Direct3D converts the homogeneous vertices to non-homogeneous vertices by dividing the x-, y-, and z-coordinates by the w-coordinate, and produces an RHW value by inverting the w-coordinate. This is only done for vertices which are transformed and lit by Direct3D.
The RHW value is used in multiple ways: for calculating fog, for performing perspective-correct texture mapping, and for w-buffering (an alternate form of depth buffering).
With D3D_OVERLOADS defined, D3DVECTOR is declared as
_D3DVECTOR(D3DVALUE _x, D3DVALUE _y, D3DVALUE _z);
D3DVALUE is the fundamental Direct3D fractional data type. It's declared in d3dtypes.h as
typedef float D3DVALUE, *LPD3DVALUE;
The source shows that the x and y values for the D3DVECTOR are always 0.0f (this will be changed in InitDeviceObjects()). rhw is always 0.5f, color is 0xffffffff, and specular is set to 0. Only the tu1 and tv1 values differ between the four vertices. These are the coordinates of the background texture.
In order to map texels onto primitives, Direct3D requires a uniform address range for all texels in all textures. Therefore, it uses a generic addressing scheme in which all texel addresses are in the range of 0.0 to 1.0 inclusive.
If, instead, you decide to assign texture coordinates to make Direct3D use the bottom half of the texture, the texture coordinates your application would assign to the vertices of the primitive in this example are (0.0,0.0), (1.0,0.0), (1.0,0.5), and (0.0,0.5). Direct3D will apply the bottom half of the texture as the background.
Note: By assigning texture coordinates outside that range, you can create certain special texturing effects.
You will find the declaration of D3DTextr_CreateTextureFromFile() in the Framework source in d3dtextr.cpp. It creates a local bitmap from a passed file. Textures could be created from *.bmp and *.tga files. Textures are managed in the framework in a linked list, which holds the info per texture, called texture container.
struct TextureContainer
{
    TextureContainer* m_pNext; // Linked list ptr
    TCHAR m_strName[80];       // Name of texture (doubles as image filename)
    DWORD m_dwWidth;
    DWORD m_dwHeight;
    DWORD m_dwStage;           // Texture stage (for multitexture devices)
    DWORD m_dwBPP;
    DWORD m_dwFlags;
    BOOL m_bHasAlpha;
    LPDIRECTDRAWSURFACE7 m_pddsSurface; // Surface of the texture
    HBITMAP m_hbmBitmap;       // Bitmap containing texture image
    DWORD* m_pRGBAData;
public:
    HRESULT LoadImageData();
    HRESULT LoadBitmapFile( TCHAR* strPathname );
    HRESULT LoadTargaFile( TCHAR* strPathname );
    HRESULT Restore( LPDIRECT3DDEVICE7 pd3dDevice );
    HRESULT CopyBitmapToSurface();
    HRESULT CopyRGBADataToSurface();
    TextureContainer( TCHAR* strName, DWORD dwStage, DWORD dwFlags );
    ~TextureContainer();
};
The problem was actually in my rendering logic. I set the stride of the vertex buffer (the stride passed to IASetVertexBuffers) to 0 instead of the size of my vertex struct, sizeof(SimpleVertex). Changed that, and it renders just fine!
