WebGL rotation matrix

I have this very simple rotation program in WebGL to get some understanding of rotation matrices.
http://poly.byethost18.com/pyra1.htm
As you can see, the object doesn't actually turn around.
The code for the vertex shader is here:
float angle = radians( theta );
float c = cos( angle );
float s = sin( angle );
mat4 ry = mat4( c,   0.0, -s,  0.0,
                0.0, 1.0, 0.0, 0.0,
                s,   0.0, c,   0.0,
                0.0, 0.0, 0.0, 1.0 );
gl_Position = ry * vPosition;
I guess this is fairly standard beginner stuff, but what could be the cause?

Needed to add:
gl.enable(gl.DEPTH_TEST);
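For context, a minimal sketch of where depth testing is typically enabled in a WebGL setup and how the depth buffer is cleared each frame; the gl context and the drawScene function are illustrative names, not taken from the linked program:
gl.enable(gl.DEPTH_TEST);   // keep only the nearest fragments, so back faces no longer draw over front faces
gl.depthFunc(gl.LESS);      // LESS is already the default; shown only for completeness
function drawScene() {
    // clear depth as well as color every frame, otherwise stale depth values remain
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
    // ... update the theta uniform and issue the draw call here ...
    requestAnimationFrame(drawScene);
}
requestAnimationFrame(drawScene);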

Related

WebGL Perspective Projection

I have a scene of a cube with vertices defined like so
const positions = [
-10.0, 10.0,
10.0, 10.0,
-10.0, -10.0,
10.0, -10.0
];
I'm trying to go from a field of view perspective matrix (not shown) to a specified rectangle perspective projection matrix defined like so
const n = 0.1;
const f = 100.0;
const l = -50;
const r = 50;
const t = 50;
const b = -50;
Float32Array([
(2*n/r-l), 0.0, 0.0, 0.0,
0.0, (2*n/t-b), 0.0, 0.0,
(r+l/r-l), (t+b/t-b), -(f+n/f-n), -1.0,
0.0, 0.0, -(2*f*n/f-n), 0.0
]);
I also have a model matrix that moves the box -6 units along the z-axis so that it's within the bounds of the near and far clip planes.
Am I right to assume that, before any transformations, the coordinates I use to specify the box and the perspective matrix are in the same space/frame of reference, and that the box should therefore be dead center in the view?
The box renders with the field of view matrix, but not the matrix defined above.
The formulas are not exact. Try this (I have added parentheses only):
Float32Array([
(2*n)/(r-l), 0.0, 0.0, 0.0,
0.0, (2*n)/(t-b), 0.0, 0.0,
(r+l)/(r-l), (t+b)/(t-b), -(f+n)/(f-n), -1.0,
0.0, 0.0, -(2*f*n)/(f-n), 0.0
]);
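As a sketch, these corrected entries are the standard off-center frustum projection written in column-major order (the layout gl.uniformMatrix4fv expects with transpose set to false); a small helper, given the hypothetical name makeFrustum here, could build it from the six plane values:
function makeFrustum(l, r, b, t, n, f) {
    // column-major 4x4 frustum matrix, suitable for gl.uniformMatrix4fv(loc, false, m)
    return new Float32Array([
        (2 * n) / (r - l), 0, 0, 0,
        0, (2 * n) / (t - b), 0, 0,
        (r + l) / (r - l), (t + b) / (t - b), -(f + n) / (f - n), -1,
        0, 0, -(2 * f * n) / (f - n), 0
    ]);
}
// makeFrustum(-50, 50, -50, 50, 0.1, 100.0) reproduces the matrix above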

WebGL is there a way to load dynamic buffers in fragment shaders?

I have a fragment shader that can draw an arc based on a set of parameters. The idea was to make the shader resolution independent, so I pass the center of the arc and the bounding radii as pixel values on the screen. You can then just render the shader by setting your vertex positions in the shape of a square. This is the shader:
precision mediump float;
#define PI 3.14159265359
#define _2_PI 6.28318530718
#define PI_2 1.57079632679
// inputs
vec2 center = u_resolution / 2.;
vec2 R = vec2( 100., 80. );
float ang1 = 1.0 * PI;
float ang2 = 0.8 * PI;
vec3 color = vec3( 0., 1.0, 0. );
// prog vars
uniform vec2 u_resolution;
float smOOth = 1.3;
vec3 bkgd = vec3( 0.0 ); // will be a sampler
void main () {
// get the dist from the current pixel to the coord.
float r = distance( gl_FragCoord.xy, center );
if ( r < R.x && r > R.y ) {
// If we are in the radius, do some trig to find the angle and normalize
// it to the range 0 .. 2*PI
float theta = -( atan( gl_FragCoord.y - center.y,
                       center.x - gl_FragCoord.x ) ) + PI;
// This is to make sure the angles are clipped at 2 pi, but if you pass
// the values already clipped, then you can safely delete this and make
// the code more efficient.
ang1 = mod( ang1, _2_PI );
ang2 = mod( ang2, _2_PI );
float angSum = ang1 + ang2;
bool thetaCond;
vec2 thBound; // short for theta bounds: used to calculate smoothing
// at the edges of the circle.
if ( angSum > _2_PI ) {
thBound = vec2( ang2, angSum - _2_PI );
thetaCond = ( theta > ang2 && theta < _2_PI ) ||
            ( theta < thBound.y );
} else {
thBound = vec2( ang2, angSum );
thetaCond = theta > ang2 && theta < angSum;
}
if ( thetaCond ) {
float angOpMult = 10000. / ( R.x - R.y ) / smOOth;
float opacity = smoothstep( 0.0, 1.0, ( R.x - r ) / smOOth ) -
smoothstep( 1.0, 0.0, ( r - R.y ) / smOOth ) -
smoothstep( 1.0, 0.0, ( theta - thBound.x )
* angOpMult ) -
smoothstep( 1.0, 0.0, ( thBound.y - theta )
* angOpMult );
gl_FragColor = vec4( mix( bkgd, color, opacity ), 1.0 );
} else
discard;
} else
discard;
}
I figured this way of drawing a circle would yield better quality circles and be less hassle than loading a bunch of vertices and drawing triangle fans, even though it probably isn't as efficient. This works fine, but I don't just want to draw one fixed circle. I want to draw any circle I would want on the screen. So I had an idea to set the 'inputs' to varyings and pass a buffer with parameters to each of the vertices of a given bounding square. So my vertex shader looks like this:
attribute vec2 a_square;
attribute vec2 a_center;
attribute vec2 a_R;
attribute float a_ang1;
attribute float a_ang2;
attribute vec3 a_color;
varying vec2 center;
varying vec2 R;
varying float ang1;
varying float ang2;
varying vec3 color;
void main () {
gl_Position = vec4( a_square, 0.0, 1.0 );
center = a_center;
R = a_R;
ang1 = a_ang1;
ang2 = a_ang2;
color = a_color;
}
'a_square' is just the vertex for the bounding square that the circle would sit in.
Next, I define a buffer for the inputs for one test circle (in JS). One of the problems with doing it this way is that the circle parameters have to be repeated for each vertex, and for a box, this means four times. 'pw' and 'ph' are the width and height of the canvas, respectively.
var circleData = new Float32Array( [
pw / 2, ph / 2, 440, 280, Math.PI * 1.2, Math.PI * 0.2, 1000, 0, 0,
pw / 2, ph / 2, 440, 280, Math.PI * 1.2, Math.PI * 0.2, 1000, 0, 0,
pw / 2, ph / 2, 440, 280, Math.PI * 1.2, Math.PI * 0.2, 1000, 0, 0,
pw / 2, ph / 2, 440, 280, Math.PI * 1.2, Math.PI * 0.2, 1000, 0, 0,
] );
Then I simply load my data into a gl buffer (circleBuffer) and bind the appropriate attributes to it.
gl.bindBuffer( gl.ARRAY_BUFFER, bkgd.circleBuffer );
gl.vertexAttribPointer( bkgd.aCenter, 2, gl.FLOAT, false, 0 * floatSiz, 9 * floatSiz );
gl.enableVertexAttribArray( bkgd.aCenter );
gl.vertexAttribPointer( bkgd.aR, 2, gl.FLOAT, false, 2 * floatSiz, 9 * floatSiz );
gl.enableVertexAttribArray( bkgd.aR );
gl.vertexAttribPointer( bkgd.aAng1, 1, gl.FLOAT, false, 4 * floatSiz, 9 * floatSiz );
gl.enableVertexAttribArray( bkgd.aAng1 );
gl.vertexAttribPointer( bkgd.aAng2, 1, gl.FLOAT, false, 5 * floatSiz, 9 * floatSiz );
gl.enableVertexAttribArray( bkgd.aAng2 );
gl.vertexAttribPointer( bkgd.aColor, 3, gl.FLOAT, false, 6 * floatSiz, 9 * floatSiz );
gl.enableVertexAttribArray( bkgd.aColor );
When I load my page, I do see a circle, but the radii seem to be the only attributes that actually respond to the data I pass. The angles, center, and color do not reflect the values they are supposed to, and I have absolutely no idea why the radii are the only things that actually work.
Nonetheless, this seems to be an inefficient way to load arguments into a fragment shader to draw a circle, as I have to reload the values for every vertex of the box, and then the GPU interpolates those values for no reason. Is there a better way to pass something like an attribute buffer to a fragment shader, or in general to use a fragment shader in this way? Or should I just use vertices to draw my circle instead?
If you're only drawing circles you can use instanced drawing to not repeat the info.
See this Q&A: what does instancing do in webgl
Or this article
Instancing lets you use some data per instance, as in per circle.
You can also use a texture to store the per circle data or all data. See this Q&A: How to do batching without UBOs?
Whether either is more or less efficient depends on the GPU/driver/OS/browser. If you need to draw thousands of circles this might be efficient. Most apps draw a variety of things, so they would choose a more generic solution unless they had a special need to draw thousands of circles.
Also, it may not be efficient, because you're still calling the fragment shader for every pixel that is in the square but not in the circle. That's about 30% more calls to the fragment shader than using triangles, and that assumes your code is drawing quads that tightly fit the circles. At a glance, your actual code appears to be drawing full-canvas quads, which is terribly inefficient.
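For reference, a hedged sketch of the instanced approach in WebGL2 (attribute and buffer variable names here are illustrative, not from the question's code): the quad is uploaded once, each circle's nine parameters are stored once, and a divisor of 1 tells WebGL to advance the per-circle attributes once per instance instead of once per vertex.
// one shared quad, 4 vertices drawn as a triangle strip
gl.bindBuffer(gl.ARRAY_BUFFER, quadBuffer);
gl.enableVertexAttribArray(aSquare);
gl.vertexAttribPointer(aSquare, 2, gl.FLOAT, false, 0, 0);
// one 9-float record per circle: center (2), R (2), ang1, ang2, color (3)
gl.bindBuffer(gl.ARRAY_BUFFER, circleBuffer);
const stride = 9 * 4;   // bytes per circle record
gl.vertexAttribPointer(aCenter, 2, gl.FLOAT, false, stride, 0 * 4);
gl.vertexAttribPointer(aR,      2, gl.FLOAT, false, stride, 2 * 4);
gl.vertexAttribPointer(aAng1,   1, gl.FLOAT, false, stride, 4 * 4);
gl.vertexAttribPointer(aAng2,   1, gl.FLOAT, false, stride, 5 * 4);
gl.vertexAttribPointer(aColor,  3, gl.FLOAT, false, stride, 6 * 4);
[aCenter, aR, aAng1, aAng2, aColor].forEach(function (loc) {
    gl.enableVertexAttribArray(loc);
    gl.vertexAttribDivisor(loc, 1);   // advance once per circle, not once per vertex
});
gl.drawArraysInstanced(gl.TRIANGLE_STRIP, 0, 4, numCircles);
In WebGL1 the same pattern works through the ANGLE_instanced_arrays extension (vertexAttribDivisorANGLE and drawArraysInstancedANGLE).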

How to get fragment coordinate in fragment shader in Metal?

This minimal Metal shader pair renders a simple interpolated gradient onto the screen (when provided with a vertex quad/triangle) based on the vertices' color attributes:
#include <metal_stdlib>
using namespace metal;
typedef struct {
float4 position [[position]];
float4 color;
} vertex_t;
vertex vertex_t vertex_function(const device vertex_t *vertices [[buffer(0)]], uint vid [[vertex_id]]) {
return vertices[vid];
}
fragment half4 fragment_function(vertex_t interpolated [[stage_in]]) {
return half4(interpolated.color);
}
…with the following vertices:
{
// x, y, z, w, r, g, b, a
1.0, -1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0,
-1.0, -1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0,
-1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0,
1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 0.0, 1.0,
1.0, -1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0,
-1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0
}
So far so good. It renders the well-known gradient triangle/quad.
The one you find in pretty much every single GPU HelloWorld tutorial.
However, I need a fragment shader that, instead of taking the interpolated vertex color, computes a color based on the fragment's position on screen.
It receives a screen-filling quad of vertices and then uses only the fragment shader to calculate the actual colors.
From my understanding the position of a vertex is a float4 with the first three elements being the 3d vector and the 4th element set to 1.0.
So, I thought, it should be easy to modify the above to have it simply reinterpret the vertex's position as a color in the fragment shader, right?
#include <metal_stdlib>
using namespace metal;
typedef struct {
float4 position [[position]];
} vertex_t;
vertex vertex_t vertex_function(const device vertex_t *vertices [[buffer(0)]], uint vid [[vertex_id]]) {
return vertices[vid];
}
fragment half4 fragment_function(vertex_t interpolated [[stage_in]]) {
float4 color = interpolated.position;
color += 1.0; // move from range -1..1 to 0..2
color *= 0.5; // scale from range 0..2 to 0..1
return half4(color);
}
…with the following vertices:
{
// x, y, z, w,
1.0, -1.0, 0.0, 1.0,
-1.0, -1.0, 0.0, 1.0,
-1.0, 1.0, 0.0, 1.0,
1.0, 1.0, 0.0, 1.0,
1.0, -1.0, 0.0, 1.0,
-1.0, 1.0, 0.0, 1.0,
}
I was quite surprised, however, to find a uniformly colored (yellow) screen being rendered, instead of a gradient going from red=0.0 to red=1.0 along the x-axis and green=0.0 to green=1.0 along the y-axis:
(expected render output vs. actual render output)
The interpolated.position appears to be yielding the same value for each fragment.
What am I doing wrong here?
PS: (While this dummy fragment logic could easily have been accomplished using vertex interpolation, my actual fragment logic cannot be.)
The interpolated.position appears to be yielding the same value for each fragment.
No, the values are just very large. The variable with the [[position]] qualifier, in the fragment shader, is in pixel coordinates. Divide by the render target dimensions, and you'll see what you want, except for having to invert the green value, because Metal's convention is to define the upper-left as the origin for this, not the bottom-left.
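A minimal sketch of that correction, assuming the drawable size is supplied to the fragment stage as a buffer at index 0 (the viewportSize name, the buffer index, and the matching setFragmentBytes call on the CPU side are assumptions, not from the original code):
fragment half4 fragment_function(vertex_t interpolated [[stage_in]],
                                 constant float2 &viewportSize [[buffer(0)]]) {
    // [[position]] arrives in pixel coordinates; normalize to 0..1
    float2 uv = interpolated.position.xy / viewportSize;
    uv.y = 1.0 - uv.y;   // Metal's fragment origin is the upper-left, so flip to get green increasing upward
    return half4(uv.x, uv.y, 0.0, 1.0);
}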

OpenGL ES 1 Scaling without changing lighting on object?

I am implementing a 3D .obj viewer-like app, and when I scale my object (in OpenGL ES 1.x) it becomes lighter when I scale down and darker when I scale up.
Is there a way for me to prevent this change in lighting from happening, i.e. keep the same brightness uniformly throughout?
I guess I have to do something to the lighting normals?
My Render method is as below:
void RenderingEngine::Render(const vector<Visual>& visuals, ivec2 screenSize) const
{
glClearColor(0.5f, 0.5f, 0.5f, 1);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
vector<Visual>::const_iterator visual = visuals.begin();
for (int visualIndex = 0; visual != visuals.end(); ++visual, ++visualIndex) {
// Set the viewport transform.
ivec2 size = visual->ViewportSize;
ivec2 lowerLeft = visual->LowerLeft;
glViewport(lowerLeft.x, lowerLeft.y, size.x, size.y);
// Set the light position.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
vec4 lightPosition(0.25, 0.25, 1, 0);
glLightfv(GL_LIGHT0, GL_POSITION, lightPosition.Pointer());
// Set the model-view transform.
mat4 rotation = visual->Orientation.ToMatrix();
mat4 translation = visual->Translate;
mat4 scale;
scale = scale.Scale(visual->Scale);
rotation = rotation * scale;
//mat4 modelview = rotation * m_translation;
mat4 modelview = rotation * m_translation * translation;
glLoadMatrixf(modelview.Pointer());
// Set the projection transform.
float h = 4.0f * size.y / size.x;
mat4 projection = mat4::Frustum(-2, 2, -h / 2, h / 2, 5, 50);
glMatrixMode(GL_PROJECTION);
glLoadMatrixf(projection.Pointer());
// Set the diffuse color.
vec3 color = visual->Color * 0.75f;
vec4 diffuse(color.x, color.y, color.z, 1);
glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, diffuse.Pointer());
// Draw the surface.
int stride = 2 * sizeof(vec3);
const GLvoid* normalOffset = (const GLvoid*) sizeof(vec3);
const Drawable& drawable = m_drawables[visualIndex];
glBindBuffer(GL_ARRAY_BUFFER, drawable.VertexBuffer);
glVertexPointer(3, GL_FLOAT, stride, 0);
glNormalPointer(GL_FLOAT, stride, normalOffset);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, drawable.IndexBuffer);
glDrawElements(GL_TRIANGLES, drawable.IndexCount, GL_UNSIGNED_SHORT, 0);
}
}
First off, apply your rotation and scale matrices BEFORE your translation matrix (or matrices). So... instead of this:
rotation = rotation * scale;
//mat4 modelview = rotation * m_translation;
mat4 modelview = rotation * m_translation * translation;
Do this:
rotation = rotation * scale;
//mat4 modelview = rotation * m_translation;
mat4 modelview = m_translation * translation * rotation;
Secondly, why do you have two translation matrices? You should only need one matrix to transform your object vertices from object space into world space.
Third, why are you creating a normal matrix? Unless you are modifying normals for some sheared object post-rotation, I don't really see any reason to do so. You can rotate the normals for your object using the same rotation matrix you use for your vertices.
This solution works for me, so I'll share it for any newbies to the subject who run into the same problem as I did.
Use:
glEnable(GL_NORMALIZE);
to always normalize the surface normals to unit length.

GLSL: Changing more than two vertex dimensions crashes shader

For testing purposes I am trying to set the position of every vertex to zero. But if I try to change more than two dimensions (and it doesn't matter which), the shader crashes silently. Can anybody clue me in to what is going on here? My code:
static const float vertices[12] = {
-0.5,-0.5, 0.0,
0.5,-0.5, 0.0,
-0.5, 0.5, 0.0,
0.5, 0.5, 0.0,
};
glVertexAttribPointer(vertexHandle, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)vertices);
glEnableVertexAttribArray(vertexHandle);
glUniformMatrix4fv(mvpMatrixHandle, 1, GL_FALSE, (const GLfloat*)&modelViewProjection.data[0]);
glDrawArrays(GL_POINTS, 0, 4);
And my shader:
attribute vec4 vertexPosition;
uniform mat4 modelViewProjectionMatrix;
void main()
{
vec4 temp = vertexPosition;
temp.x = 0.0;
temp.y = 0.0;
temp.z = 0.0; // Can set any 2 dimensions (e.g. x and y or y and z)
// to zero, but not all three or the shader crashes.
gl_Position = modelViewProjectionMatrix * vec4(temp.xyz, 1.0);
}
Maybe it's because you declare vertexPosition as a vec4, but you are only passing 3 floats per vertex in your C code? I think the part about your temp vector is a red herring.
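If the mismatch the answer suggests is indeed the culprit, one way to make the declaration match the data is to declare the attribute as a vec3 (matching the 3 floats per vertex supplied to glVertexAttribPointer) and widen it manually; this is only a sketch of that idea, not a confirmed fix:
attribute vec3 vertexPosition;   // 3 components, matching the vertex array
uniform mat4 modelViewProjectionMatrix;
void main()
{
    vec3 temp = vertexPosition;
    temp = vec3(0.0);   // zero all three components for the test
    gl_Position = modelViewProjectionMatrix * vec4(temp, 1.0);
}
The alternative would be to keep the vec4 declaration and supply four floats per vertex (with w = 1.0) in the vertices array and the glVertexAttribPointer call.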
