WebGL draws coordinates that vary from -1 to 1. These coordinates get normalized by dividing by w -- the perspective divide. How does this happen with an orthographic projection, given that the last row of an orthographic projection matrix is the same as the identity matrix's? That is, w will remain 1. How are the coordinates then normalized to [-1, 1] with an orthographic projection?
What do you mean by "normalized"?
WebGL doesn't care what your matrices are; it just cares what you set gl_Position to.
A typical orthographic matrix just scales and translates x and y (and z) and sets w to 1.
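For example, an orthographic projection that maps the box left..right, bottom..top, near..far into clip space looks something like this (exact signs depend on the convention you pick):
2/(r-l)    0          0          -(r+l)/(r-l)
0          2/(t-b)    0          -(t+b)/(t-b)
0          0          -2/(f-n)   -(f+n)/(f-n)
0          0          0           1
The bottom row is 0, 0, 0, 1, so w comes out as 1 and the divide by w is a divide by 1; the x, y, and z you end up with are just the scaled and translated inputs.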
The formula for how the value you set gl_Position to gets converted to a pixel is something like
var x = gl_Position.x / gl_Position.w;
var y = gl_Position.y / gl_Position.w;
// convert from -1 <-> 1 to 0 <-> 1
var zeroToOneX = x * 0.5 + 0.5;
var zeroToOneY = y * 0.5 + 0.5;
// convert from 0 <-> 1 to viewportX <-> (viewportX + viewportWidth)
// and do something similar for Y
var pixelX = viewportX + zeroToOneX * viewportWidth;
var pixelY = viewportY + zeroToOneY * viewportHeight;
Where viewportX, viewportY, viewportWidth, and viewportHeight are set with gl.viewport
If you want the exact formula you can look in the spec under rasterization.
You might find these tutorials helpful.
I have implemented this tutorial https://webglfundamentals.org/webgl/lessons/webgl-2d-rotation.html but there are some things I don't understand.
In the shader the rotation is applied by creating a new vec2 rotatedPosition:
...
vec2 rotatedPosition = vec2(
a_position.x * u_rotation.y + a_position.y * u_rotation.x,
a_position.y * u_rotation.y - a_position.x * u_rotation.x);
...
but how exactly does this actually provide a rotation? With rotation = [1, 0] -> u_rotation = vec2(1, 0) the object is rotated 90 degrees. I understand the unit-circle maths; what I don't understand is how the simple equation above actually performs the transformation.
a_position is a vec2, u_rotation is a vec2. If I do the calculation above outside the shader and feed it into the shader as a position, that position simply becomes vec2 rotatedPosition = vec2(y, -x). But inside the shader, this calculation vec2 rotatedPosition = vec2( a_position.x * u_rota... performs a rotation (it does not become vec2(y, -x) but instead uses a rotation matrix).
What magic ensures that rotatedPosition actually gets rotated when the vec2 is calculated inside the shader, as opposed to outside the shader? What 'tells' the vertex shader that it's supposed to do a rotation-matrix calculation, as opposed to normal arithmetic?
Check the 2D rotation article on Wikipedia. The magic (linear algebra) is that your u_rotation vector probably holds the cos and sin of the angle θ in radians.
The rotation is counterclockwise in a two-dimensional Cartesian coordinate system. Example:
// your point
a = (a.x, a.y) = (1, 0)
// your angle in radians
θ = PI/2
// intermediaries
cos(θ) = 0
sin(θ) = 1
// your rotation vector
r = (r.x, r.y) = (cos(θ), sin(θ)) = (0, 1)
// your new `v` vector, the rotation of `a` around
// the center of the coordinate system, with angle `θ`
// be careful: 'v' must be a new vector, because if you try
// to reuse 'a' or 'r' you will mess up the math
v.x = a.x * r.x - a.y * r.y = 1 * 0 - 0 * 1 = 0
v.y = a.x * r.y + a.y * r.x = 1 * 1 + 0 * 0 = 1
Here v will be x=0, y=1 as it should be. Your code does not seem to do the same.
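For comparison, the tutorial's shader computes x' = x*c + y*s and y' = y*c - x*s with u_rotation = (s, c) = (sin θ, cos θ). If you plug -θ into the standard formulas above (cos(-θ) = cos θ, sin(-θ) = -sin θ) you get exactly that, so the tutorial's version is simply a rotation by -θ, i.e. in the opposite direction.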
Maybe you also want to know how to rotate the point around an arbitrary point, not always around the origin of the coordinate system. You have to translate your point relative to the new rotation center, rotate it, then translate it back, like this:
// your arbitrary point to rotate around
center = (10, 10)
// translate (be careful not to subtract `a` from `center`)
a = a - center;
// rotate as before
...
// translate back
a = a + center;
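Putting the whole sequence together, a minimal sketch in C++ (the names are mine, not from the original post):
#include <cmath>

struct Point2 { double x, y; };

// Rotate point `a` by `theta` radians (counterclockwise) around an arbitrary `center`:
// translate so the center becomes the origin, rotate, translate back.
Point2 rotateAround(Point2 a, Point2 center, double theta)
{
    const double c = std::cos(theta);
    const double s = std::sin(theta);
    // translate
    const double x = a.x - center.x;
    const double y = a.y - center.y;
    // rotate around the origin (same formulas as above)
    const double rx = x * c - y * s;
    const double ry = x * s + y * c;
    // translate back
    return { rx + center.x, ry + center.y };
}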
Our AR device is based on a camera with pretty strong optical zoom. We measure the distortion of this camera using classical camera-calibration tools (checkerboards), both through OpenCV and the GML Camera Calibration tools.
At higher zoom levels (I'll use 249 out of 255 as an example) we measure the following camera parameters at full HD resolution (1920x1080):
fx = 24545.4316
fy = 24628.5469
cx = 924.3162
cy = 440.2694
For the radial and tangential distortion we measured 4 values:
k1 = 5.423406
k2 = -2964.24243
p1 = 0.004201721
p2 = 0.0162647516
We are not sure how to interpret (read: implement) those extremely large values for k1 and k2. Using OpenCV's classic "undistort" operation to rectify the image using these values seems to work well. Unfortunately this is (much) too slow for realtime usage.
The thumbnails below look similar; clicking them will display the full-size images where you can spot the difference:
(image: Camera footage)
(image: Undistorted using OpenCV)
That's why we want to take the opposite approach: leave the camera footage distorted and apply a similar distortion to our 3D scene using shaders. Following the OpenCV documentation, and this accepted answer in particular, the distorted position for a corner point (0, 0) would be
// To relative coordinates
double x = (point.X - cx) / fx; // -960 / 24545 = -0.03911
double y = (point.Y - cy) / fy; // -540 / 24628 = -0.02193
double r2 = x*x + y*y; // 0.002010
// Radial distortion
// -0.03911 * (1 + 5.423406 * 0.002010 + -2964.24243 * 0.002010 * 0.002010) = -0.039067
double xDistort = x * (1 + k1 * r2 + k2 * r2 * r2);
// -0.02193 * (1 + 5.423406 * 0.002010 + -2964.24243 * 0.002010 * 0.002010) = -0.021906
double yDistort = y * (1 + k1 * r2 + k2 * r2 * r2);
// Tangential distortion
... left out for brevity
// Back to absolute coordinates.
xDistort = xDistort * fx + cx; // -0.039067 * 24545.4316 + 924.3162 = -34.6002 !!!
yDistort = yDistort * fy + cy; // -0.021906 * 24628.5469 + 440.2694 = -99.2435 !!!
These large pixel displacements (34 and 100 pixels at the upper left corner) seem overly warped and do not correspond with the undistorted image OpenCV generates.
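For completeness, here is the computation above collected into a single function (the struct and variable names are ours; the tangential terms are written out this time):
#include <cmath>

struct Intrinsics { double fx, fy, cx, cy, k1, k2, p1, p2; };

// Maps an undistorted pixel position to its distorted position using the
// OpenCV distortion model with coefficients k1, k2, p1, p2.
void distortPoint(const Intrinsics& in, double px, double py,
                  double& xDistort, double& yDistort)
{
    // to relative (normalized) coordinates
    const double x = (px - in.cx) / in.fx;
    const double y = (py - in.cy) / in.fy;
    const double r2 = x * x + y * y;

    // radial distortion
    const double radial = 1.0 + in.k1 * r2 + in.k2 * r2 * r2;
    double xd = x * radial;
    double yd = y * radial;

    // tangential distortion
    xd += 2.0 * in.p1 * x * y + in.p2 * (r2 + 2.0 * x * x);
    yd += in.p1 * (r2 + 2.0 * y * y) + 2.0 * in.p2 * x * y;

    // back to absolute (pixel) coordinates
    xDistort = xd * in.fx + in.cx;
    yDistort = yd * in.fy + in.cy;
}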
So the specific question is: what is wrong with the way we interpreted the values we measured, and what should the correct code for distortion be?
Given a set of 3D points in camera's perspective corresponding to a planar surface (ground), is there any fast efficient method to find the orientation of the plane regarding the camera's plane? Or is it only possible by running heavier "surface matching" algorithms on the point cloud?
I've tried to use estimateAffine3D and findHomography, but my main limitation is that I don't have the point coordinates on the surface plane - I can only select a set of points from the depth images and thus must work from a set of 3D points in the camera frame.
I've written a simple geometric approach that takes a couple of points and computes vertical and horizontal angles based on the depth measurements, but I fear this is neither very robust nor very precise.
EDIT: Following the suggestion by @Micka, I've attempted to fit the points to a 2D plane in the camera's frame, with the following function:
#include <opencv2/opencv.hpp>
//------------------------------------------------------------------------------
/// @brief Fits a set of 3D points to a plane by solving a system of linear equations of type aX + bY + cZ + d = 0
///
/// @param[in] points The points
///
/// @return 3x1 Mat with the plane coefficients [a, b, d] (c is fixed at -1, i.e. aX + bY + d = Z)
///
cv::Mat fitPlane(const std::vector< cv::Point3d >& points) {
// plane equation: aX + bY + cZ + d = 0
// assuming c=-1 -> aX + bY + d = z
cv::Mat xys = cv::Mat::ones(points.size(), 3, CV_64FC1);
cv::Mat zs = cv::Mat::ones(points.size(), 1, CV_64FC1);
// populate left and right hand matrices
for (int idx = 0; idx < points.size(); idx++) {
xys.at< double >(idx, 0) = points[idx].x;
xys.at< double >(idx, 1) = points[idx].y;
zs.at< double >(idx, 0) = points[idx].z;
}
// coeff mat
cv::Mat coeff(3, 1, CV_64FC1);
// problem is now xys * coeff = zs
// solving using SVD should output coeff
cv::SVD svd(xys);
svd.backSubst(zs, coeff);
// alternative approach -> requires mat with 3D coordinates & additional col
// solves xyzs * coeff = 0
// cv::SVD::solveZ(xyzs, coeff); // #note: data type must be double (CV_64FC1)
// check result with the input coordinates (the plane equation should output zero or very small values)
double a = coeff.at< double >(0);
double b = coeff.at< double >(1);
double d = coeff.at< double >(2);
for (auto& point : points) {
std::cout << a * point.x + b * point.y + d - point.z << std::endl;
}
return coeff;
}
For simplicity, it is assumed that the camera is properly calibrated and that the 3D reconstruction is correct - something I have already validated, and therefore out of the scope of this question. I use the mouse to select points on a depth/color frame pair, reconstruct the 3D coordinates, and pass them into the function above.
I've also tried other approaches besides cv::SVD::solveZ(), such as inverting xyz with cv::invert() and using cv::solve(), but they always ended in either ridiculously small values or runtime errors regarding matrix size and/or type.
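If the fit itself is sound, the orientation I'm after should follow from the plane's normal vector. A rough sketch of what I have in mind (not yet validated):
#include <opencv2/opencv.hpp>
#include <cmath>

// The fitted plane is aX + bY - Z + d = 0 (c fixed at -1), so its normal is (a, b, -1).
cv::Vec3d planeNormal(const cv::Mat& coeff) {
    cv::Vec3d n(coeff.at<double>(0), coeff.at<double>(1), -1.0);
    return n / cv::norm(n); // unit normal, expressed in the camera frame
}

// Angle (radians) between the plane normal and the camera's optical axis (0, 0, 1).
double angleToOpticalAxis(const cv::Mat& coeff) {
    cv::Vec3d n = planeNormal(coeff);
    return std::acos(std::abs(n[2])); // |n . (0, 0, 1)| = |n.z|
}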
So I wrote the following function that takes a frame and a polar-coordinate function, and graphs it by generating the Cartesian coordinates within that frame. Here's the code.
func cartesianCoordsForPolarFunc(frame: CGRect, thetaCoefficient:Double, cosScalar:Double, iPrecision:Double, largestScalar:Double) -> Array<CGPoint> {
// Frame: The frame in which to fit this curve.
// thetaCoefficient: The number to scale theta by in the cos.
// cosScalar: The number to multiply the cos by.
// largestScalar: Largest cosScalar used in this frame so that scaling is relative.
// iPrecision: The step for continuity. 0 < iPrecision <= 2.pi. Defaults to 0.1
// Clean inputs
var precision:Double = 0.1 // Default precision
if iPrecision != 0 {// Can't be 0.
precision = iPrecision
}
// This is the polar function
// var theta: Double = 0 // 0 <= theta <= 2pi
// let r = cosScalar * cos(thetaCoefficient * theta)
var points:Array<CGPoint> = [] // We store the points here
for theta in stride(from: 0, to: Double.pi * 2 , by: precision) { //TODO: Try to recreate continuity. WHY IS IT NOT 2PI
let x = cosScalar * cos(thetaCoefficient * theta) * cos(theta) // Convert to cartesian
let y = cosScalar * cos(thetaCoefficient * theta) * sin(theta) // Convert to cartesian
// newvalue = (max'-min')/(max-min)*(value-max)+max'
let scaled_x = (Double(frame.width) - 0)/(largestScalar*2)*(x-largestScalar)+Double(frame.width) // Scale to the frame
let scaled_y = (Double(frame.height) - 0)/(largestScalar*2)*(y-largestScalar)+Double(frame.height) // Scale to the frame
points.append(CGPoint(x: scaled_x, y:scaled_y)) // Add the result
}
print("Done points")
return points
}
The polar function I'm passing is r = 100*cos(9/4*theta) which looks like this.
I'm wondering why my function returns the following when theta goes from 0 to 2π. (Please note that in this image I'm drawing different-sized flowers, hence the repetition of the pattern.)
As you can see it's wrong. The weird thing is that when theta goes from 0 to 2Pi*100 it works and I get this (it also works for other values such as 2Pi*4 and 2Pi*20, but not 2Pi*2 or 2Pi*10).
Why is this? Is the domain not 0 to 2Pi? I noticed that when going to 2Pi*100 it redraws some petals so there is a limit, but what is it?
PS: Precision here is 0.01 (enough to act like it's continuous). In my images I'm drawing the function in different sizes and overlapping (last image has 2 inner flowers).
No, the domain is not going to be 2π. Set up your code to draw slowly, taking 2 seconds for each 2π, and watch. It makes a whole series of full circles, and each time the local maxima and minima land at different points. That's what your petals are. It looks like your formula repeats after 8π.
It looks like the period is the denominator of the theta coefficient times 2π. Your theta coefficient is 9/4, the denominator is 4, so the period is 4*2π, or 8π.
(That is based on playing in Wolfram Alpha and observing the results. I may be wrong.)
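A quick check that 8π really is a period: cos(9/4 * (θ + 8π)) = cos(9/4 * θ + 18π) = cos(9/4 * θ), because 18π is a whole multiple of 2π. For 2π*2 or 2π*10 the extra term inside the cosine is 9π or 45π, which is not a whole multiple of 2π, so the curve does not line up with itself there; that matches what you observed.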
I need the angular velocity expressed as a quaternion for updating the quaternion every frame with the following expression in OpenCV:
q(k)=q(k-1)*qwt;
My angular velocity is
Mat w; //1x3
I would like to obtain a quaternion form of the angles
Mat qwt; //1x4
I couldn't find information about this, any ideas?
If I understand properly, you want to go from this axis-angle form to a quaternion.
As shown in the link, first you need to calculate the magnitude of the angular velocity (multiplied by the delta(t) between frames), and then apply the formulas.
A sample function for this would be
// w is equal to angular_velocity*time_between_frames
void quatFromAngularVelocity(Mat& qwt, const Mat& w)
{
const float x = w.at<float>(0);
const float y = w.at<float>(1);
const float z = w.at<float>(2);
const float angle = sqrt(x*x + y*y + z*z); // magnitude of the angular velocity (times dt)
if (angle > 0.0) // the formulas from the link
{
qwt.at<float>(0) = x*sin(angle/2.0f)/angle;
qwt.at<float>(1) = y*sin(angle/2.0f)/angle;
qwt.at<float>(2) = z*sin(angle/2.0f)/angle;
qwt.at<float>(3) = cos(angle/2.0f);
} else // avoid division by zero when there is (almost) no rotation
{
qwt.at<float>(0) = qwt.at<float>(1) = qwt.at<float>(2) = 0.0f;
qwt.at<float>(3) = 1.0f; // identity quaternion
}
}
Almost every transformation regarding quaternions, 3D space, etc. is gathered on this website.
You will also find time derivatives for quaternions there.
I find the explanation of the physical meaning of a quaternion useful: it can be seen as an axis-angle representation, where
a = angle of rotation
x,y,z = axis of rotation.
Then the conversion uses:
q = cos(a/2) + i ( x * sin(a/2)) + j (y * sin(a/2)) + k ( z * sin(a/2))
Here it is explained thoroughly.
Hope this helped to make it clearer.
One little trick to go with this and get rid of those cos and sin functions: the time derivative of a quaternion q(t) is
dq(t)/dt = 0.5 * x(t) * q(t)
where, if the angular velocity is {w0, w1, w2}, then x(t) is the pure quaternion {0, w0, w1, w2}. See David H. Eberly's book, section 10.5, for a proof.
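A minimal sketch of that update step (my own illustration, not from the book; the quaternion is stored as (x, y, z, w) to match the code above):
#include <cmath>

struct Quat { float x, y, z, w; };

// Hamilton product p * q
Quat quatMul(const Quat& p, const Quat& q)
{
    Quat r;
    r.w = p.w*q.w - p.x*q.x - p.y*q.y - p.z*q.z;
    r.x = p.w*q.x + p.x*q.w + p.y*q.z - p.z*q.y;
    r.y = p.w*q.y - p.x*q.z + p.y*q.w + p.z*q.x;
    r.z = p.w*q.z + p.x*q.y - p.y*q.x + p.z*q.w;
    return r;
}

// One integration step: q <- normalize(q + 0.5 * dt * (0, wx, wy, wz) * q)
void integrateAngularVelocity(Quat& q, float wx, float wy, float wz, float dt)
{
    Quat omega = { wx, wy, wz, 0.0f };      // pure quaternion built from the angular velocity
    Quat dq = quatMul(omega, q);            // x(t) * q(t)
    q.x += 0.5f * dt * dq.x;
    q.y += 0.5f * dt * dq.y;
    q.z += 0.5f * dt * dq.z;
    q.w += 0.5f * dt * dq.w;
    float n = std::sqrt(q.x*q.x + q.y*q.y + q.z*q.z + q.w*q.w);
    q.x /= n; q.y /= n; q.z /= n; q.w /= n; // renormalize to keep it a unit quaternion
}
No sin or cos involved; the price is that you have to renormalize, and the result is only a first-order approximation, which is usually fine for small per-frame rotations.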