WebGL TRIANGLE vs TRIANGLE_STRIP

I've got a single triangle rendering using gl.TRIANGLE_STRIP, but when I try to change it to gl.TRIANGLE, the faces do not render. The vertices appear to render as really tiny dots, but the faces are empty.
My understanding is that the vertex format for TRIANGLE vs TRIANGLE_STRIP should be identical for a single triangle.
// vertex setup
const buffer = gl.createBuffer();
// 7 floats per vertex: x, y, z position followed by r, g, b, a color
const vertices = new Float32Array([
  1, -1, -1,   1, 1.3, 1.5, 1,
  1, -1, 1,    1.3, 1, 1.5, 1,
  0, 1, 0,     1, 1, 1.75, 1
]);
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);
const length = vertices.length / 7; // 7 floats per vertex => 3 vertices
const mode = gl.TRIANGLE_STRIP;
return {buffer, length, mode};
That works as expected with the following render code:
// render frame
gl.bindBuffer(gl.ARRAY_BUFFER, shape.buffer);
// stride is 28 bytes (7 floats * 4 bytes); color starts at byte offset 12
gl.vertexAttribPointer(attribs.position, 3, gl.FLOAT, false, 28, 0);
gl.vertexAttribPointer(attribs.color, 4, gl.FLOAT, false, 28, 12);
gl.enableVertexAttribArray(attribs.position);
gl.enableVertexAttribArray(attribs.color);
gl.useProgram(programs.colored);
gl.uniformMatrix4fv(uniforms.projection, false, projection);
gl.uniformMatrix4fv(uniforms.modelView, false, modelView);
gl.drawArrays(shape.mode, 0, shape.length);
But if I change the mode to gl.TRIANGLE, no faces appear, with the vertices just barely visible as tiny dots.
What am I misunderstanding here?
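For what it's worth: WebGL's mode constant for separate triangles is gl.TRIANGLES (plural). gl.TRIANGLE is not defined on the context, so drawArrays receives undefined, which coerces to 0, and 0 happens to be gl.POINTS; that would explain the tiny dots. A quick console check:
// gl.TRIANGLE is not a WebGL constant:
console.log(gl.TRIANGLE);  // undefined
console.log(gl.TRIANGLES); // 4
console.log(gl.POINTS);    // 0 (what an undefined mode coerces to)
const mode = gl.TRIANGLES; // renders the single triangle's face as expected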

Related

How to change where rendered object is placed on the screen, OpenGL ES 2.0 iOS

I downloaded a project that displays a square with some texture. The square is currently located near the bottom of the screen, and I want to move it to the middle. Here is the code.
Geometry.h
#import "GLKit/GLKit.h"
#ifndef Geometry_h
#define Geometry_h
typedef struct {
    float Position[3];
    float Color[4];
    float TexCoord[2];
    float Normal[3];
} Vertex;
typedef struct {
    int x;
    int y;
    int z;
} Position;
extern const Vertex VerticesCube[24];
extern const GLubyte IndicesTrianglesCube[36];
#endif
Here is the code in Geometry.m:
const Vertex VerticesCube[] = {
    // Front
    {{1, -1, 1}, {1, 0, 0, 1}, {0, 1}, {0, 0, 1}},
    {{1, 1, 1}, {0, 1, 0, 1}, {0, 2.0/3.0}, {0, 0, 1}},
    {{-1, 1, 1}, {0, 0, 1, 1}, {1.0/3.0, 2.0/3.0}, {0, 0, 1}},
    {{-1, -1, 1}, {0, 0, 0, 1}, {1.0/3.0, 1}, {0, 0, 1}},
};
const GLubyte IndicesTrianglesCube[] =
{
    // Front
    0, 1, 2,
    2, 3, 0,
};
What part of this code determines the position of the rendered object on the screen?
None of the code you posted has to do with screen position.
VerticesCube specifies the cube corners in an arbitrary 3D space. Somewhere in code you haven't posted, a projection transform (and probably also view and model transforms) map each vertex to clip space, and a glViewport call (which is probably implicitly done for you if you're using GLKView) maps clip space to screen/view coordinates.
Rearranging things on screen could involve any one of those transforms, and which one to use is a choice that depends on understanding each one and how it fits into the larger context of your app design.
This is the sort of thing you'd get from the early stages of any OpenGL tutorial. Here's a decent one.
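To make that concrete: here is a minimal sketch of an adjustment at the model-view level, assuming, hypothetically, that the project renders through a GLKBaseEffect (the effect variable and the translation values are illustrative, not taken from the posted code):
#import <GLKit/GLKit.h>
// Hypothetical sketch: move the square toward the middle of the screen by
// translating the model-view transform; +y shifts the object up.
GLKMatrix4 modelView = GLKMatrix4MakeTranslation(0.0f, 0.5f, -5.0f);
effect.transform.modelviewMatrix = modelView; // `effect` is an assumed GLKBaseEffect *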

Drawing multiple identical pyramids in OpenGL ES 2.0

I want to draw as many pyramids as it takes to fill up the space. I can draw a single pyramid, change its color, etc. But now I want to draw a lot of pyramids that can fill the screen, and I want to use a single set of vertices and indices.
The vertex and index data, with color information, are as follows:
const Vertex Vertices[] = {
    {{-1, -1, -1}, {1, 0, 0, 1}},
    {{1, -1, -1}, {1, 0, 0, 1}},
    {{1, -1, 1}, {1, 0, 0, 1}},
    {{-1, -1, 1}, {1, 0, 0, 1}},
    {{0, 1, 0}, {1, 0, 0, 1}}
};
const GLubyte Indices[] = {
    2, 4, 3,
    1, 4, 2,
    0, 4, 1,
    4, 0, 3
};
Can anybody help me with the code, as I know I am making some mistakes?
In OpenGL ES 2.0 the only way you can do this is simply by re-rendering the pyramid at different positions on the screen. What you're getting at is called 'instancing', and it's only supported in OpenGL ES 3.0, where you would have, as you said, one set of vertices and indices, but you would issue a single GL command to draw many instances of them; in the vertex shader you would have a built-in variable, gl_InstanceID, that tells you which instance you're currently on.
There are some vendor-specific extensions that allow you to do instancing in OpenGL ES 2.0, such as NV_draw_instanced, but again, that would only work on specific vendors' hardware.
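For the ES 2.0 route, the re-rendering loop might look like the sketch below; uModelMatrix (a mat4 uniform the vertex shader would need) and the offsets are assumptions, not from the question's code.
// Draw the same pyramid several times, updating a model-matrix uniform
// before each draw call.
const float offsets[][3] = { {-2, 0, 0}, {0, 0, 0}, {2, 0, 0} };
for (int i = 0; i < 3; ++i) {
    GLKMatrix4 model = GLKMatrix4MakeTranslation(offsets[i][0], offsets[i][1], offsets[i][2]);
    glUniformMatrix4fv(uModelMatrix, 1, GL_FALSE, model.m);
    // 12 indices = the 4 triangles in Indices[]; GL_UNSIGNED_BYTE matches GLubyte
    glDrawElements(GL_TRIANGLES, 12, GL_UNSIGNED_BYTE, 0);
}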

OpenCV Error: Bad argument <Unknown array type> in unknown function, file ..\..\..\modules\core\src\matrix.cpp, line 697

I'm currently trying to rectify stereo cameras to create a disparity map. Unfortunately, I'm having trouble getting past the stereo rectification step because I keep receiving the error
"OpenCV Error: Bad argument (Unknown array type) in unknown function, file ..\..\..\modules\core\src\matrix.cpp, line 697."
The process is complicated by the fact that I'm not the one who calibrated the cameras, nor do I have access to the cameras used to record the videos. I was given all of the calibration parameters (intrinsics, distortion coefficients, rotation matrix, and translation vector). As you can see, I've tried to turn these directly into CvMats and use them that way, but I get an error when I try to actually use them.
Thanks in advance.
CvMat li, lm, ri, rm, r, t, Rl, Rr, Pl, Pr;
double init_li[3][3] =
{ {477.984984743, 0, 316.17458671},
{0, 476.861945645, 253.45073026},
{0, 0 ,1} };
double init_lm[5] = {-0.117798518453, 0.147554949385, -0.0549082041898, 0, 0};
double init_ri[3][3] =
{{478.640315323, 0, 299.957994781},
{0, 477.898896505, 251.665771947},
{0, 0, 1}};
double init_rm[5] = {-0.10884732532, 0.12118405303, -0.0322073237741, 0, 0};
double init_r[3][3] =
{{0.999973709051976, 0.00129700728791757, -0.00713435189275776},
{-0.00132096594266573, 0.999993501087837, -0.00335452397041856},
{0.00712995468519435, 0.00336386001267643, 0.99996892361313}};
double init_t[3] = {-0.0830973040641153, -0.00062704210860633, 1.4287643345188e-005};
cvInitMatHeader(&li, 3, 3, CV_64FC1, init_li);
cvInitMatHeader(&lm, 5, 1, CV_64FC1, init_lm);
cvInitMatHeader(&ri, 3, 3, CV_64FC1, init_ri);
cvInitMatHeader(&rm, 5, 1, CV_64FC1, init_rm);
cvInitMatHeader(&r, 3, 3, CV_64FC1, init_r);
cvInitMatHeader(&t, 3, 1, CV_64FC1, init_t);
cvInitMatHeader(&Rl, 3,3, CV_64FC1);
cvInitMatHeader(&Rr, 3,3, CV_64FC1);
cvInitMatHeader(&Pl, 3,4, CV_64FC1);
cvInitMatHeader(&Pr, 3,4, CV_64FC1);
// frame is a cv::Mat holding the first frame of the video.
CvSize imageSize = frame.size();
imageSize.width /= 2;
//IT BREAKS HERE
cvStereoRectify(&li, &ri, &lm, &rm, imageSize, &r, &t, &Rl, &Rr, &Pl, &Pr);
So, you've been bitten by the C API? Why don't you just turn your back on it?
Use the C++ API whenever possible; please don't start learning OpenCV with the old (1.0), deprecated API!
double init_li[9] =
{ 477.984984743, 0, 316.17458671,
0, 476.861945645, 253.45073026,
0, 0 ,1 };
double init_lm[5] = {-0.117798518453, 0.147554949385, -0.0549082041898, 0, 0};
double init_ri[9] =
{ 478.640315323, 0, 299.957994781,
0, 477.898896505, 251.665771947,
0, 0, 1};
double init_rm[5] = {-0.10884732532, 0.12118405303, -0.0322073237741, 0, 0};
double init_r[9] =
{ 0.999973709051976, 0.00129700728791757, -0.00713435189275776,
-0.00132096594266573, 0.999993501087837, -0.00335452397041856,
0.00712995468519435, 0.00336386001267643, 0.99996892361313};
double init_t[3] = {-0.0830973040641153, -0.00062704210860633, 1.4287643345188e-005};
cv::Mat li(3, 3, CV_64FC1, init_li);
cv::Mat lm(5, 1, CV_64FC1, init_lm);
cv::Mat ri(3, 3, CV_64FC1, init_ri);
cv::Mat rm(5, 1, CV_64FC1, init_rm);
cv::Mat r(3, 3, CV_64FC1, init_r);
cv::Mat t(3, 1, CV_64FC1, init_t);
cv::Mat Rl, Rr, Pl, Pr, Q; // note: the outputs need no initialization.
// frame is a cv::Mat holding the first frame of the video.
cv::Size imageSize = frame.size();
imageSize.width /= 2;
//IT won't break HERE
cv::stereoRectify(li, ri, lm, rm, imageSize, r, t, Rl, Rr, Pl, Pr);
// no need ever to release or care about anything
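From there, the usual next step toward a disparity map is to build rectification maps and remap each frame; a brief sketch (leftFrame is a hypothetical cv::Mat holding the left view):
// Build per-camera rectification maps from the stereoRectify outputs.
cv::Mat map1l, map2l;
cv::initUndistortRectifyMap(li, lm, Rl, Pl, imageSize, CV_32FC1, map1l, map2l);
// Apply them to a frame.
cv::Mat rectifiedLeft;
cv::remap(leftFrame, rectifiedLeft, map1l, map2l, cv::INTER_LINEAR);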
Ok, so I figured out the answer. The problem was that I had only initialized headers for Rl, Rr, Pl, and Pr, but no memory was allocated for the data itself. I was able to fix it as follows:
double init_Rl[3][3];
double init_Rr[3][3];
double init_Pl[3][4];
double init_Pr[3][4];
cvInitMatHeader(&Rl, 3,3, CV_64FC1, init_Rl);
cvInitMatHeader(&Rr, 3,3, CV_64FC1, init_Rr);
cvInitMatHeader(&Pl, 3,4, CV_64FC1, init_Pl);
cvInitMatHeader(&Pr, 3,4, CV_64FC1, init_Pr);
Although I have a theory that I might have been able to use cv::stereoRectify with cv::Mats as parameters, which would have made life much easier. I don't know whether cv::stereoRectify exists, but it seems that versions of many of the other C functions live in the cv namespace. In case it's hard to tell, I'm very new to OpenCV.
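(As the answer above shows, cv::stereoRectify does exist.) Staying with the C API, cvCreateMat would also have avoided the problem, since it allocates the data along with the header; a brief sketch:
// cvCreateMat allocates both the header and the data, unlike cvInitMatHeader:
CvMat* Rl = cvCreateMat(3, 3, CV_64FC1);
CvMat* Pl = cvCreateMat(3, 4, CV_64FC1);
// ... use them, then release manually:
cvReleaseMat(&Rl);
cvReleaseMat(&Pl);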

Kalman filters with four input parameters

I have been studying the operation of the Kalman filter for a couple of days now, to improve the performance of my face detection program. From the information I have gathered, I have put together some code. The code for the Kalman filter part is as follows.
int Kalman(int X, int faceWidth, int Y, int faceHeight, IplImage *img1) {
CvRandState rng;
const float T = 0.1;
// Initialize Kalman filter object, window, number generator, etc
cvRandInit( &rng, 0, 1, -1, CV_RAND_UNI );
//IplImage* img = cvCreateImage( cvSize(500,500), 8, 3 );
CvKalman* kalman = cvCreateKalman( 4, 4, 0 );
// Initializing with random guesses
// state x_k
CvMat* state = cvCreateMat( 4, 1, CV_32FC1 );
cvRandSetRange( &rng, 0, 0.1, 0 );
rng.disttype = CV_RAND_NORMAL;
cvRand( &rng, state );
// Process noise w_k
CvMat* process_noise = cvCreateMat( 4, 1, CV_32FC1 );
// Measurement z_k
CvMat* measurement = cvCreateMat( 4, 1, CV_32FC1 );
cvZero(measurement);
/* create matrix data */
const float A[] = {
1, 0, T, 0,
0, 1, 0, T,
0, 0, 1, 0,
0, 0, 0, 1
};
const float H[] = {
1, 0, 0, 0,
0, 0, 0, 0,
0, 0, 1, 0,
0, 0, 0, 0
};
//Didn't use this matrix in the end as it gave an error:'ambiguous call to overloaded function'
/* const float P[] = {
pow(320,2), pow(320,2)/T, 0, 0,
pow(320,2)/T, pow(320,2)/pow(T,2), 0, 0,
0, 0, pow(240,2), pow(240,2)/T,
0, 0, pow(240,2)/T, pow(240,2)/pow(T,2)
}; */
const float Q[] = {
pow(T,3)/3, pow(T,2)/2, 0, 0,
pow(T,2)/2, T, 0, 0,
0, 0, pow(T,3)/3, pow(T,2)/2,
0, 0, pow(T,2)/2, T
};
const float R[] = {
1, 0, 0, 0,
0, 0, 0, 0,
0, 0, 1, 0,
0, 0, 0, 0
};
//Copy created matrices into kalman structure
memcpy( kalman->transition_matrix->data.fl, A, sizeof(A));
memcpy( kalman->measurement_matrix->data.fl, H, sizeof(H));
memcpy( kalman->process_noise_cov->data.fl, Q, sizeof(Q));
//memcpy( kalman->error_cov_post->data.fl, P, sizeof(P));
memcpy( kalman->measurement_noise_cov->data.fl, R, sizeof(R));
//Initialize other Kalman Filter parameters
//cvSetIdentity( kalman->measurement_matrix, cvRealScalar(1) );
//cvSetIdentity( kalman->process_noise_cov, cvRealScalar(1e-5) );
/*cvSetIdentity( kalman->measurement_noise_cov, cvRealScalar(1e-1) );*/
cvSetIdentity( kalman->error_cov_post, cvRealScalar(1e-5) );
/* choose initial state */
kalman->state_post->data.fl[0]=X;
kalman->state_post->data.fl[1]=faceWidth;
kalman->state_post->data.fl[2]=Y;
kalman->state_post->data.fl[3]=faceHeight;
//cvRand( &rng, kalman->state_post );
/* predict position of point */
const CvMat* prediction=cvKalmanPredict(kalman,0);
//generate measurement (z_k)
cvRandSetRange( &rng, 0, sqrt(kalman->measurement_noise_cov->data.fl[0]), 0 );
cvRand( &rng, measurement );
cvMatMulAdd( kalman->measurement_matrix, state, measurement, measurement );
//Draw rectangles in detected face location
cvRectangle( img1,
cvPoint( kalman->state_post->data.fl[0], kalman->state_post->data.fl[2] ),
cvPoint( kalman->state_post->data.fl[1], kalman->state_post->data.fl[3] ),
CV_RGB( 0, 255, 0 ), 1, 8, 0 );
cvRectangle( img1,
cvPoint( prediction->data.fl[0], prediction->data.fl[2] ),
cvPoint( prediction->data.fl[1], prediction->data.fl[3] ),
CV_RGB( 0, 0, 255 ), 1, 8, 0 );
cvShowImage("Kalman",img1);
//adjust kalman filter state
cvKalmanCorrect(kalman,measurement);
cvMatMulAdd(kalman->transition_matrix, state, process_noise, state);
return 0;
}
In the face detection part (not shown), a box is drawn around the detected face. X, Y, faceWidth, and faceHeight are the coordinates of the box and its width and height, passed into the Kalman filter. img1 is the current frame of the video.
Results:
Although I do get two new rectangles from the state_post and prediction data (as seen in the code), neither of them seems to be any more stable than the initial box drawn without the Kalman filter.
Here are my questions:
Are the matrices I initialized (transition matrix A, measurement matrix H, etc.) correct for this four-input case (e.g. 4x4 matrices for four inputs)?
Can't we set every matrix to be an identity matrix?
Is the method I followed, up to plotting the rectangles, theoretically correct? I followed the examples in this and the book 'Learning OpenCV', which don't use external inputs.
Any help regarding this would be much appreciated!
H[] should be the identity if you measure directly from the image. If you have 4 measurements and you make some values on the diagonal 0, you are forcing those expected measurements (x*H) to 0 when that is not true. The innovation (z - x*H) in the Kalman filter will then be high.
R[] should also be diagonal, though the covariance of the measurement error can be different from one. If you have normalized coordinates (width = height = 1), R could be something like 0.01. If you are dealing with pixel coordinates, R = diagonal ones means an error of one pixel, and that's OK. You can try 2, 3, 4, etc.
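Concretely, those two suggestions amount to something like this (the values are illustrative; tune R to your noise level):
// Identity H: each measurement maps directly onto one state component.
cvSetIdentity( kalman->measurement_matrix, cvRealScalar(1) );
// Diagonal R: roughly one pixel of measurement noise; try 2, 3, 4, ... as needed.
cvSetIdentity( kalman->measurement_noise_cov, cvRealScalar(1) );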
Your transition matrix A[], which is supposed to propagate the state on each frame, looks like a transition matrix for a state composed of x, y, v_x, and v_y. But you don't mention velocity in your model; you only talk about measurements. Be careful: do not confuse the state (which describes the position of the face) with the measurements (which are used to update the state). Your state can be position, velocity, and acceleration, and your measurements can be n points in the image, or the x and y position of the face.
Hope this helps.

WebGL adding projection doesn't display object

I am having a look at WebGL and trying to render a cube, but I am having a problem when I try to add projection into the vertex shader. I have added a uniform, but when I use it to multiply the model-view and position, the cube stops displaying. I'm not sure why, and was wondering if anyone could help? I've tried looking at a few examples but just can't get this to work.
vertex shader
attribute vec3 aVertexPosition;
uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
void main(void) {
gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
//gl_Position = uMVMatrix * vec4(aVertexPosition, 1.0);
}
fragment shader
#ifdef GL_ES
precision highp float; // Not sure why this is required, need to google it
#endif
uniform vec4 uColor;
void main() {
gl_FragColor = uColor;
}
function init() {
// Get a reference to our drawing surface
canvas = document.getElementById("webglSurface");
gl = canvas.getContext("experimental-webgl");
/** Create our simple program **/
// Get our shaders
var v = document.getElementById("vertexShader").firstChild.nodeValue;
var f = document.getElementById("fragmentShader").firstChild.nodeValue;
// Compile vertex shader
var vs = gl.createShader(gl.VERTEX_SHADER);
gl.shaderSource(vs, v);
gl.compileShader(vs);
// Compile fragment shader
var fs = gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(fs, f);
gl.compileShader(fs);
// Create program and attach shaders
program = gl.createProgram();
gl.attachShader(program, vs);
gl.attachShader(program, fs);
gl.linkProgram(program);
// Some debug code to check for shader compile errors and log them to console
if (!gl.getShaderParameter(vs, gl.COMPILE_STATUS))
console.log(gl.getShaderInfoLog(vs));
if (!gl.getShaderParameter(fs, gl.COMPILE_STATUS))
console.log(gl.getShaderInfoLog(fs));
if (!gl.getProgramParameter(program, gl.LINK_STATUS))
console.log(gl.getProgramInfoLog(program));
/* Create some simple VBOs*/
// Vertices for a cube
var vertices = new Float32Array([
-0.5, 0.5, 0.5, // 0
-0.5, -0.5, 0.5, // 1
0.5, 0.5, 0.5, // 2
0.5, -0.5, 0.5, // 3
-0.5, 0.5, -0.5, // 4
-0.5, -0.5, -0.5, // 5
0.5, 0.5, -0.5, // 6
0.5, -0.5, -0.5 // 7
]);
// Indices of the cube
var indices = new Uint16Array([
0, 1, 2, 1, 2, 3, // front
5, 4, 6, 5, 6, 7, // back
0, 1, 5, 0, 5, 4, // left
2, 3, 6, 6, 3, 7, // right
0, 4, 2, 4, 2, 6, // top
5, 3, 1, 5, 3, 7 // bottom
]);
// create vertices object on the GPU
vbo = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);
// Create indices object on the GPU
ibo = gl.createBuffer();
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, ibo);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);
gl.clearColor(0.0, 0.0, 0.0, 1.0);
gl.enable(gl.DEPTH_TEST);
// Render scene every 33 milliseconds
setInterval(render, 33);
}
var mvMatrix = mat4.create();
var pMatrix = mat4.create();
function render() {
// Set our viewport and clear it before we render
gl.viewport(0, 0, canvas.width, canvas.height);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
gl.useProgram(program);
// Bind appropriate VBOs
gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, ibo);
// Set the color for the fragment shader
program.uColor = gl.getUniformLocation(program, "uColor");
gl.uniform4fv(program.uColor, [0.3, 0.3, 0.3, 1.0]);
//
// code.google.com/p/glmatrix/wiki/Usage
program.uPMatrix = gl.getUniformLocation(program, "uPMatrix");
program.uMVMatrix = gl.getUniformLocation(program, "uMVMatrix");
mat4.perspective(45, gl.viewportWidth / gl.viewportHeight, 1.0, 10.0, pMatrix);
mat4.identity(mvMatrix);
mat4.translate(mvMatrix, [0.0, -0.25, -1.0]);
gl.uniformMatrix4fv(program.uPMatrix, false, pMatrix);
gl.uniformMatrix4fv(program.uMVMatrix, false, mvMatrix);
// Set the position for the vertex shader
program.aVertexPosition = gl.getAttribLocation(program, "aVertexPosition");
gl.enableVertexAttribArray(program.aVertexPosition);
gl.vertexAttribPointer(program.aVertexPosition, 3, gl.FLOAT, false, 3*4, 0); // position
// Render the Object
gl.drawElements(gl.TRIANGLES, 36, gl.UNSIGNED_SHORT, 0);
}
Thanks in advance for any help.
Problem is here:
..., gl.viewportWidth / gl.viewportHeight, ...
Both gl.viewportWidth and gl.viewportHeight are undefined values.
I think you missed these two lines:
gl.viewportWidth = canvas.width;
gl.viewportHeight = canvas.height;
You will see a lot of people doing this:
canvas.width = canvas.clientWidth;
canvas.height = canvas.clientHeight;
gl.viewportWidth = canvas.width;
gl.viewportHeight = canvas.height;
But please note that the WebGL context also has these two attributes:
gl.drawingBufferWidth
gl.drawingBufferHeight
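Using those avoids tracking the size by hand; for instance, the perspective call from the question could read (a sketch, reusing the same gl, mat4, and pMatrix as above):
// The drawing buffer's size is always defined on the context:
mat4.perspective(45, gl.drawingBufferWidth / gl.drawingBufferHeight, 1.0, 10.0, pMatrix);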
So your cube shows up without the perspective matrix, correct?
At first glance I would think that you may be clipping away your geometry with the near plane. You provide near and far planes to the perspective function as 1.0 and 10.0 respectively. This means that for any fragments to be visible they must fall in the z range of [1, 10]. Your cube is 1 unit per side, centered on (0, 0, 0), and you are moving it "back" from the camera 1 unit. This means that the nearest face to the camera will actually be at 0.5 Z, which is outside the clipping range and therefore discarded. About half of your cube WILL be at z > 1, but you'll be looking at the inside of the cube at that point. If you have backface culling turned on, you won't see anything.
Long story short: your cube is probably too close to the camera. Try this instead:
mat4.translate(mvMatrix, [0.0, -0.25, -3.0]);
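Alternatively, by the same reasoning, pulling the near plane in keeps the untranslated cube inside the clip range (a sketch; the aspect ratio is written out with canvas dimensions to sidestep the undefined gl.viewport* properties):
// A nearer clip plane widens the visible z range to [0.1, 10]:
mat4.perspective(45, canvas.width / canvas.height, 0.1, 10.0, pMatrix);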
