How do rotations work in the WebGL graphics pipeline?

I have implemented this tutorial https://webglfundamentals.org/webgl/lessons/webgl-2d-rotation.html but there are some things I don't understand.
In the shader the rotation is applied by creating a new vec2 rotatedPosition:
...
vec2 rotatedPosition = vec2(
a_position.x * u_rotation.y + a_position.y * u_rotation.x,
a_position.y * u_rotation.y - a_position.x * u_rotation.x);
...
but how exactly is this providing a rotation? With rotation=[1,0] -> u_rotation=vec2(1,0) the object is rotated 90 degrees. I understand the unit-circle maths; what I don't understand is how the simple equation above actually performs the transformation.
a_position is a vec2, u_rotation is a vec2. If I do the calculation above outside the shader and feed it into the shader as a position, that position simply becomes vec2 rotatedPosition = vec2(y, -x). But inside the shader this calculation vec2 rotatedPosition = vec2( a_position.x * u_rota... performs a rotation (it does not become vec2(y, -x) but instead acts like a rotation matrix).
What magic ensures that rotatedPosition actually gets rotated when the vec2 is calculated inside the shader, as opposed to outside the shader? What 'tells' the vertex shader that it's supposed to do a rotation-matrix calculation, as opposed to normal arithmetic?

Check the 2D rotation article on Wikipedia. The magic (linear algebra) is that your u_rotation vector probably holds the cos and sin of the angle θ in radians.
The rotation is counterclockwise in a two-dimensional Cartesian coordinate system. Example:
// your point
a = (a.x, a.y) = (1, 0)
// your angle in radians
θ = PI/2
// intermediaries
cos(θ) = 0
sin(θ) = 1
// your rotation vector
r = (r.x, r.y) = (cos(θ), sin(θ)) = (0, 1)
// your new `v` vector, the rotation of `a` around
// the center of the coordinate system, with angle `θ`
// be careful: 'v' must be a new vector, because if you try
// to reuse 'a' or 'r' you will mess up the math
v.x = a.x * r.x - a.y * r.y = 1 * 0 - 0 * 1 = 0
v.y = a.x * r.y + a.y * r.x = 1 * 1 + 0 * 0 = 1
Here v will be x=0, y=1 as it should be. Your code does not seem to do the same.
Maybe you also want to know how to rotate the point around an arbitrary other point, not always around the center of the coordinate system. You have to translate your point relative to the new rotation center, rotate it, then translate it back, like this:
// your arbitrary point to rotate around
center = (10, 10)
// translate (be careful not to subtract `a` from `center`)
a = a - center;
// rotate as before
...
// translate back
a = a + center;
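The whole translate-rotate-translate recipe above can be condensed into a few lines of Python (the function name `rotate_around` is just for illustration):

```python
import math

def rotate_around(ax, ay, cx, cy, theta):
    """Rotate point (ax, ay) counterclockwise by theta radians
    around the center (cx, cy)."""
    # translate so the rotation center becomes the origin
    x, y = ax - cx, ay - cy
    rx, ry = math.cos(theta), math.sin(theta)
    # standard 2D rotation (matches v.x / v.y above)
    vx = x * rx - y * ry
    vy = x * ry + y * rx
    # translate back
    return vx + cx, vy + cy
```

With `center = (0, 0)` this reduces to the plain rotation worked through in the example.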

Related

Metal shader determine point inside a convex quadrilateral

Is there a builtin way in Metal shading language to determine if a point lies inside a convex quadrilateral (or convex polygon in general)? If not, what is the quickest way to determine the same?
I have not been able to find a Metal function that meets your needs. I will propose what I believe to be a relatively fast solution (although please feel free to critique or improve it). Note that I have assumed you are working in 2D (or at least a 2D frame for a polygon whose vertices are coplanar).
constant constexpr float M_PI = 3.14159265358979323846264338327950288;
constant constexpr float2 iHat = float2(1, 0);

namespace metal {
    // The sawtooth function
    METAL_FUNC float sawtooth(float f) { return f - floor(f); }

    /// A polygon with `s` sides oriented with `transform`, which converts points from the system within which the polygon resides.
    /// The frame "attached" to the polygon has an X axis passing through a vertex of the polygon. `circR` refers to the radius
    /// of the circumscribed circle that passes through each of the vertices
    struct polygon {
        const uint s;
        const float circR;
        const float3x3 transform;
        // Constructor
        polygon(uint s, float circR, float3x3 transform) : s(s), circR(circR), transform(transform) {}
        // `pt` is assumed to be a point in the parent system. `contains` excludes the set of points along the edges of the polygon
        bool contains(float2 pt);
    };
}

bool metal::polygon::contains(float2 pt) {
    // The position in the frame of the polygon
    float2 poly_pt = (transform * float3(pt, 1)).xy;
    // Using the law of sines, we can determine the distance that is allowed (see below)
    float sqDist = distance_squared(0, poly_pt);
    // Outside the circle that circumscribes the polygon
    if (sqDist > circR * circR) return false;
    // Calculate the angle the point makes with the x axis in the frame of the polygon.
    // The wedgeAngle is the angle formed between two vertices connected by an edge
    float wedgeAngle = 2 * M_PI / s;
    float ptAngle = atan2(poly_pt.y, poly_pt.x);
    float deltaTheta = sawtooth(ptAngle / wedgeAngle) * wedgeAngle;
    // Calculate the maximum squared distance allowed at this angle relative to the
    // line segment joining the `floor(ptAngle / wedgeAngle)`th (kth) vertex with the center of the polygon.
    // This is done by viewing the polygon from a frame whose X axis is the line from the center of the polygon
    // to the kth vertex. Draw line segment L1 from the kth vertex to the (k+1)th vertex and mark its endpoints K and L respectively.
    // Draw line segment L2 from the center of the polygon to the point under consideration and mark L2's intersection with L1
    // as "A". If the center of the polygon is "O", then triangle "OKL" is isosceles with vertex angle `wedgeAngle` and
    // base angle B = M_PI / 2 - wedgeAngle / 2 (since 2B + wedgeAngle = M_PI). Triangle "OAK" contains `deltaTheta` and B.
    // Thus, the third angle is M_PI - B - deltaTheta. `maxR` results from the law of sines applied with this third angle and the
    // base angle B contained within triangle "OAK".
    float maxR = circR * sin(M_PI / 2 - wedgeAngle / 2) / sin(M_PI / 2 + wedgeAngle / 2 - deltaTheta);
    return sqDist < maxR * maxR;
}
Note that I opted for a constexpr value in lieu of a macro declaration. Either would do.
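As a cross-check outside Metal, the textbook convex-polygon containment test walks the edges and verifies the point is on the interior side of each one (sign of a 2D cross product). A minimal Python sketch under the assumption of counterclockwise vertex order (the `contains` name is mine, not a library function):

```python
def contains(poly, pt):
    """True if pt is strictly inside the convex polygon given as a
    counterclockwise list of (x, y) vertices."""
    n = len(poly)
    for i in range(n):
        ax, ay = poly[i]
        bx, by = poly[(i + 1) % n]
        # 2D cross product of edge (a -> b) with (a -> pt);
        # <= 0 means pt lies on the edge or outside it for a CCW polygon
        if (bx - ax) * (pt[1] - ay) - (by - ay) * (pt[0] - ax) <= 0:
            return False
    return True
```

Like the Metal version above, this excludes points exactly on the boundary; change `<=` to `<` to include them.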

How Are Orthographic Coordinates Normalized?

WebGL draws coordinates that vary from -1 to 1. These coordinates get normalized by dividing by w -- the perspective divide. How does this happen with an orthographic projection, given that the orthographic projection matrix is the identity matrix? That is, w will remain 1. How are the coordinates then normalized to [-1, 1] with an orthographic projection?
What do you mean by "normalized"?
WebGL doesn't care what your matrices are; it just cares what you set gl_Position to.
A typical orthographic matrix just scales and translates x and y (and z) and sets w to 1.
The formula for converting what you set gl_Position to into a pixel is something like
var x = gl_Position.x / gl_Position.w;
var y = gl_Position.y / gl_Position.w;
// convert from -1 <-> 1 to 0 <-> 1
var zeroToOneX = x * 0.5 + 0.5;
var zeroToOneY = y * 0.5 + 0.5;
// convert from 0 <-> 1 to viewportX <-> (viewportX + viewportWidth)
// and do something similar for Y
var pixelX = viewportX + zeroToOneX * viewportWidth;
var pixelY = viewportY + zeroToOneY * viewportHeight;
where viewportX, viewportY, viewportWidth, and viewportHeight are set with gl.viewport.
If you want the exact formula, you can look in the spec under rasterization.
You might also find these tutorials helpful.
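To see that the divide really is a no-op for orthographic projection (w stays 1), the pseudo-formula above can be transcribed into plain Python (function name and argument order are illustrative; the viewport arguments mirror what you pass to gl.viewport):

```python
def clip_to_pixel(pos, vp_x, vp_y, vp_w, vp_h):
    """pos is (x, y, z, w) as written to gl_Position."""
    x, y, z, w = pos
    # perspective divide (a no-op when w == 1, as with orthographic)
    ndc_x, ndc_y = x / w, y / w
    # map -1..1 to 0..1, then to the viewport rectangle
    px = vp_x + (ndc_x * 0.5 + 0.5) * vp_w
    py = vp_y + (ndc_y * 0.5 + 0.5) * vp_h
    return px, py
```

For example, clip coordinates (0, 0) land at the center of an 800x600 viewport regardless of whether w came from a perspective or an orthographic matrix.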

Using OpenGL ES 2.0 with iOS, how do I draw a cylinder between two points?

I am given two GLKVector3s representing the start and end points of the cylinder. Using these points and the radius, I need to build and render a cylinder. I can build a cylinder with the correct distance between the points, but only in a fixed direction (currently always the y-up direction (0, 1, 0)). I am not sure what calculations I need to make to get the cylinder on the correct plane between the two points, so that a line would run through the two end points. I am thinking there is some calculation I can apply as I create my vertex data, using the direction vector or angle, that will create the cylinder pointing in the correct direction. Does anyone have an algorithm, or know of one, that will help?
Are you drawing more than one of these cylinders? Or ever drawing it in a different position? If so, using the algorithm from the awesome article is a not-so-awesome idea. Every time you upload geometry data to the GPU, you incur a performance cost.
A better approach is to calculate the geometry for a single basic cylinder once — say, one with unit radius and height — and stuff that vertex data into a VBO. Then, when you draw, use a model-to-world transformation matrix to scale (independently in radius and length if needed) and rotate the cylinder into place. This way, the only new data that gets sent to the GPU with each draw call is a 4x4 matrix instead of all the vertex data for whatever polycount of cylinder you're drawing.
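The rotate-into-place step is the part the question asks about. One standard way (not the poster's code) to build the rotation that carries the canonical cylinder's Y axis onto the segment direction is Rodrigues' formula, R = I + K + K^2 * (1 - c) / s^2, where K is the cross-product matrix of axis = Y x d. A pure-Python sketch, with `align_y_to` as an illustrative name:

```python
import math

def align_y_to(direction):
    """3x3 rotation matrix (row-major nested lists) that rotates the
    unit Y axis (0, 1, 0) onto the given direction vector."""
    dx, dy, dz = direction
    n = math.sqrt(dx * dx + dy * dy + dz * dz)
    dx, dy, dz = dx / n, dy / n, dz / n
    # axis v = Y x d, cosine c = Y . d
    vx, vy, vz = dz, 0.0, -dx
    c = dy
    s2 = vx * vx + vz * vz              # |v|^2 = sin^2 of the angle
    if s2 < 1e-12:                      # d is parallel to Y
        if c > 0:
            return [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
        return [[1, 0, 0], [0, -1, 0], [0, 0, -1]]  # 180 deg about X
    k = (1 - c) / s2
    K = [[0, -vz, vy], [vz, 0, -vx], [-vy, vx, 0]]
    R = [[(1.0 if i == j else 0.0) + K[i][j] for j in range(3)] for i in range(3)]
    for i in range(3):
        for j in range(3):
            R[i][j] += k * sum(K[i][t] * K[t][j] for t in range(3))
    return R
```

Embed this 3x3 block in the upper-left of a 4x4 model matrix, together with a scale (radius, length, radius) and a translation to the segment's start point, and the unit cylinder lands exactly between the two endpoints.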
Check this awesome article; it's dated, but after adapting the algorithm it works like a charm. One tip: OpenGL ES 2.0 only supports triangles, so instead of GL_QUAD_STRIP as the method uses, use GL_TRIANGLE_STRIP; the result is identical. The site also contains a bunch of other useful information regarding OpenGL geometries.
See the code below for the solution. self represents the mesh and contains the vertices, indices, and such.
- (instancetype)initWithOriginRadius:(CGFloat)originRadius
                       atOriginPoint:(GLKVector3)originPoint
                        andEndRadius:(CGFloat)endRadius
                          atEndPoint:(GLKVector3)endPoint
                       withPrecision:(NSInteger)precision
                            andColor:(GLKVector4)color
{
    self = [super init];
    if (self) {
        // normal pointing from origin point to end point
        GLKVector3 normal = GLKVector3Make(originPoint.x - endPoint.x,
                                           originPoint.y - endPoint.y,
                                           originPoint.z - endPoint.z);
        // create two perpendicular vectors - perp and q
        GLKVector3 perp = normal;
        if (normal.x == 0 && normal.z == 0) {
            perp.x += 1;
        } else {
            perp.y += 1;
        }
        // cross product
        GLKVector3 q = GLKVector3CrossProduct(perp, normal);
        perp = GLKVector3CrossProduct(normal, q);
        // normalize vectors
        perp = GLKVector3Normalize(perp);
        q = GLKVector3Normalize(q);
        // calculate vertices
        CGFloat twoPi = 2 * M_PI;
        NSInteger index = 0;
        for (NSInteger i = 0; i < precision + 1; i++) {
            CGFloat theta = ((CGFloat)i) / precision * twoPi; // go around circle and get points
            // normals
            normal.x = cosf(theta) * perp.x + sinf(theta) * q.x;
            normal.y = cosf(theta) * perp.y + sinf(theta) * q.y;
            normal.z = cosf(theta) * perp.z + sinf(theta) * q.z;
            AGLKMeshVertex meshVertex;
            AGLKMeshVertexDynamic colorVertex;
            // top vertex
            meshVertex.position.x = endPoint.x + endRadius * normal.x;
            meshVertex.position.y = endPoint.y + endRadius * normal.y;
            meshVertex.position.z = endPoint.z + endRadius * normal.z;
            meshVertex.normal = normal;
            meshVertex.originalColor = color;
            // append vertex
            [self appendVertex:meshVertex];
            // append color vertex
            colorVertex.colors = color;
            [self appendColorVertex:colorVertex];
            // append index
            [self appendIndex:index++];
            // bottom vertex
            meshVertex.position.x = originPoint.x + originRadius * normal.x;
            meshVertex.position.y = originPoint.y + originRadius * normal.y;
            meshVertex.position.z = originPoint.z + originRadius * normal.z;
            meshVertex.normal = normal;
            meshVertex.originalColor = color;
            // append vertex
            [self appendVertex:meshVertex];
            // append color vertex
            [self appendColorVertex:colorVertex];
            // append index
            [self appendIndex:index++];
        }
        // draw command
        [self appendCommand:GL_TRIANGLE_STRIP firstIndex:0 numberOfIndices:self.numberOfIndices materialName:@""];
    }
    return self;
}

Converting angular velocity to quaternion in OpenCV

I need the angular velocity expressed as a quaternion for updating the quaternion every frame with the following expression in OpenCV:
q(k)=q(k-1)*qwt;
My angular velocity is
Mat w; //1x3
I would like to obtain a quaternion form of the angles
Mat qwt; //1x4
I couldn't find information about this, any ideas?
If I understand properly, you want to go from this axis-angle form to a quaternion.
As shown in the link, first you need to calculate the magnitude of the angular velocity (multiplied by the delta(t) between frames), and then apply the formulas.
A sample function for this would be
// w is equal to angular_velocity * time_between_frames
void quatFromAngularVelocity(Mat& qwt, const Mat& w)
{
    const float x = w.at<float>(0);
    const float y = w.at<float>(1);
    const float z = w.at<float>(2);
    const float angle = sqrt(x*x + y*y + z*z); // magnitude of angular velocity
    if (angle > 0.0) // the formulas from the link
    {
        qwt.at<float>(0) = x*sin(angle/2.0f)/angle;
        qwt.at<float>(1) = y*sin(angle/2.0f)/angle;
        qwt.at<float>(2) = z*sin(angle/2.0f)/angle;
        qwt.at<float>(3) = cos(angle/2.0f);
    }
    else // avoid dividing by a zero angle; use the identity quaternion
    {
        qwt.at<float>(0) = qwt.at<float>(1) = qwt.at<float>(2) = 0.0f;
        qwt.at<float>(3) = 1.0f;
    }
}
Almost every transformation involving quaternions, 3D space, etc. is gathered at this website.
You will find time derivatives for quaternions there as well.
I find the explanation of the physical meaning of a quaternion useful: it can be seen as an axis-angle where
a = angle of rotation
x, y, z = axis of rotation.
Then the conversion uses:
q = cos(a/2) + i ( x * sin(a/2)) + j (y * sin(a/2)) + k ( z * sin(a/2))
It is explained thoroughly here.
Hope this helped to make it clearer.
One little trick to go with this and get rid of those cos and sin functions: the time derivative of a quaternion q(t) is
dq(t)/dt = 0.5 * x(t) * q(t)
where, if the angular velocity is {w0, w1, w2}, then x(t) is the quaternion {0, w0, w1, w2}. See David H. Eberly's book, section 10.5, for proof.
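Putting the earlier answer's formulas together, here is a small Python transcription for experimenting outside OpenCV (quaternion laid out as (x, y, z, w) to match the qwt indexing above; the function name is illustrative):

```python
import math

def quat_from_angular_velocity(w, dt):
    """w is (wx, wy, wz) in rad/s; returns the rotation accumulated
    over dt seconds as a quaternion (x, y, z, w)."""
    x, y, z = (c * dt for c in w)
    angle = math.sqrt(x * x + y * y + z * z)  # magnitude of w * dt
    if angle > 0.0:
        s = math.sin(angle / 2) / angle
        return (x * s, y * s, z * s, math.cos(angle / 2))
    # zero rotation -> identity quaternion
    return (0.0, 0.0, 0.0, 1.0)
```

For instance, an angular velocity of pi rad/s about the z axis over dt = 1 s yields the half-turn quaternion (0, 0, 1, 0).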

How to move 2 XNA sprites away from each other dynamically?

I have two items, let's call them Obj1 and Obj2. Both have current positions pos1 and pos2, and current velocity vectors speed1 and speed2. How can I make sure that if their distance is shrinking (by checking the current and NEXT distance), they will move farther away from each other?
I have a signed-angle function that gives me the signed angle between 2 vectors. How can I utilize it to check how much I should rotate speed1 and speed2 to move those sprites away from each other?
public float signedAngle(Vector2 v1, Vector2 v2)
{
    float perpDot = v1.X * v2.Y - v1.Y * v2.X;
    return (float)Math.Atan2(perpDot, Vector2.Dot(v1, v2));
}
I check the NEXT and CURRENT distances like this:
float currentDistance = Vector2.Distance(s1.position, s2.position);
Vector2 obj2_nextpos = s2.position + s2.speed + s2.drag;
Vector2 obj1_nextpos = s1.position + s1.speed + s1.drag;
Vector2 s2us = s2.speed;
s2us.Normalize();
Vector2 s1us = s1.speed;
s1us.Normalize();
float nextDistance = Vector2.Distance(obj1_nextpos , obj2_nextpos );
Then, depending on whether the distance is growing or shrinking, I want to move them away (either by increasing their current speed in the same direction, or by MOVING THEM FARTHER APART, WHICH IS WHERE I FAIL)...
if (nextDistance < currentDistance)
{
    float angle = MathHelper.ToRadians(180) - signedAngle(s1us, s2us);
    s1.speed += Vector2.Transform(s1us, Matrix.CreateRotationZ(angle)) * esc;
    s2.speed += Vector2.Transform(s2us, Matrix.CreateRotationZ(angle)) * esc;
}
Any ideas ?
If objects A and B are getting closer, one of the objects' velocity components (X or Y) is opposite. In this case Bx is opposite to Ax, so you only have to add Ax to the velocity vector of object B, and Bx to the velocity vector of object A.
If I understood correctly, this is the situation and you want to obtain the two green vectors.
The red vector is easy to get: redVect = pos1 - pos2. redVect and greenVect2 will point in the same direction, so the only step left is to scale it so its length matches speed2's: finalGreenVect2 = greenVect2.Normalize() * speed2.Length (although I'm not actually sure about this formula). greenVect1 = -redVect, so finalGreenVect1 = greenVect1.Normalize() * speed1.Length. Then speed1 = finalGreenVect1 and speed2 = finalGreenVect2. This approach will give you an instant turn; if you prefer a smooth turn, you want to rotate the speed vector by:
angle = signedAngle(speed) + (signedAngle(greenVect) - signedAngle(speed)) * 0.5f;
The 0.5f is the rotation speed; adjust it to any value you need. I'm afraid you have to create a rotation matrix and then Transform() the speed vector with this matrix.
Hope this helps ;)
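The "instant turn" variant described above needs no angle math at all; here is a minimal Python sketch with illustrative names (`separate`, plain tuples standing in for XNA's Vector2):

```python
import math

def separate(pos1, speed1, pos2, speed2):
    """Redirect both speed vectors along the line joining the two
    objects, keeping each object's current speed magnitude."""
    rx, ry = pos1[0] - pos2[0], pos1[1] - pos2[1]  # red vector: pos1 - pos2
    rlen = math.hypot(rx, ry)
    ux, uy = rx / rlen, ry / rlen                  # unit direction from 2 to 1
    m1 = math.hypot(*speed1)                       # preserve speed magnitudes
    m2 = math.hypot(*speed2)
    # object 1 flees along +u, object 2 along -u
    return (ux * m1, uy * m1), (-ux * m2, -uy * m2)
```

For the smooth-turn version, blend each current heading toward these target vectors with the 0.5f factor from the answer instead of assigning them outright.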