Quaternions: rotate a model to align with a direction - XNA

Suppose you have a quaternion that describes the rotation of a 3D model.
What I want to do is, given an object (with a rotationQuaternion, a side vector, ...), align it to a target point.
For a spaceship, I want the cockpit to point at a target.
Here is some code I have. It's not doing what I want, and I don't know why:
if (_target._ray.Position != _obj._ray.Position)
{
    Vector3 vec = Vector3.Normalize(_target._ray.Position - _obj._ray.Position);
    float angle = (float)Math.Acos(Vector3.Dot(vec, _obj._ray.Direction));
    Vector3 cross = Vector3.Cross(vec, _obj._ray.Direction);

    if (cross == Vector3.Zero)
        cross = _obj._side;

    _obj._rotationQuaternion *= Quaternion.CreateFromAxisAngle(cross, angle);
}

// Updates direction, up, side vectors and model matrix
_obj.UpdateMatrix();
After some time, the rotationQuaternion ends up with X, Y, Z and W all nearly zero.
Any help?
Thanks ;-)

This is a shortcut I've used to get the quaternion for lock-on-target rotation:
Matrix rot = Matrix.CreateLookAt(_arrow.Position, _cube.Position, Vector3.Down);
_arrow.Rotation = Quaternion.CreateFromRotationMatrix(rot);
For this example, I'm rendering an arrow and a cube, where the cube is moving around in a circle; with the above code the arrow is always pointing at the cube. (Though I imagine there are some edge cases when the cube is exactly above or below.)
Once you get this quaternion (from the spaceship to the target), you can use Quaternion.Lerp() to interpolate between the current ship rotation and the aligned one. This will give your rotation a smooth transition (not just a snap to the target).
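For instance, a minimal sketch of that interpolation, reusing the snippet above (turnSpeed and dt are assumed names for a turn rate and the frame time, not from the original post):
// Sketch only: 'turnSpeed' and 'dt' are assumed names, not from the original post.
Matrix rot = Matrix.CreateLookAt(_arrow.Position, _cube.Position, Vector3.Down);
Quaternion aligned = Quaternion.CreateFromRotationMatrix(rot);
// Move only part of the way toward the aligned rotation each frame.
_arrow.Rotation = Quaternion.Lerp(_arrow.Rotation, aligned, turnSpeed * dt);
// Keep the quaternion unit length so errors don't accumulate.
_arrow.Rotation = Quaternion.Normalize(_arrow.Rotation);
Quaternion.Slerp can be swapped in for Lerp if the angular difference is large and you want a constant angular speed.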
By the way, your rotation might be getting reduced to zero because you're using *= when assigning to it.

Your code's a bit funky.
if (_target._ray.Position != _obj._ray.Position)
{
This may or may not be correct. Clearly, you've overridden the equality comparison. The correct thing to be doing here would be to ensure that the dot product between the two (unit-length) rays is close to 1. If the rays have the same origin, then presumably having equal 'positions' means they're the same.
Vector3 vec = Vector3.Normalize(_target._ray.Position - _obj._ray.Position);
This seems particularly wrong. Unless the minus operator has been overridden in a strange way, subtracting this way doesn't make sense.
Here's pseudocode for what I recommend:
normalize3(targetRay);
normalize3(objectRay);
angleDif = acos(dotProduct(targetRay, objectRay));
if (angleDif != 0) {
    orthoRay = crossProduct(objectRay, targetRay);
    normalize3(orthoRay);
    deltaQ = quaternionFromAxisAngle(orthoRay, angleDif);
    rotationQuaternion = deltaQ * rotationQuaternion;
    normalize4(rotationQuaternion);
}
Two things to note here:
Quaternion multiplication is not commutative. I've assumed that your quaternions are rotating column vectors, so I put deltaQ on the left. It's not clear what your *= operator is doing.
It's important to regularly normalize your quaternions after multiplication. Otherwise small errors accumulate and they drift away from unit length causing all manner of grief.

OMG! It worked!!!
Vector3 targetRay = Vector3.Normalize(_target._ray.Position - _obj._ray.Position);
Vector3 objectRay = Vector3.Normalize(_obj._ray.Direction);
float angle = (float)Math.Acos(Vector3.Dot(targetRay, objectRay));

if (angle != 0)
{
    Vector3 ortho = Vector3.Normalize(Vector3.Cross(objectRay, targetRay));
    _obj._rotationQuaternion = Quaternion.CreateFromAxisAngle(ortho, angle) * _obj._rotationQuaternion;
    _obj._rotationQuaternion.Normalize();
}

_obj.UpdateMatrix();
Thank you very much JCooper!!!
And niko I like the idea of Lerp ;-)

Related

Trouble implementing shadows in WebGL

I am trying to implement shadows into my WebGL 2.0 project using this tutorial:
https://webgl2fundamentals.org/webgl/lessons/webgl-shadows.html
Currently I am getting really bad results like this:
Basically, a ton of the terrain is being drawn in shadow that shouldn't be. The light projection is from your camera towards the direction you are looking, so hypothetically you shouldn't be able to see any shadows, because the light projection is the same as your camera (I am just doing this for testing until I can get this working properly).
I have everything the same as the tutorial, I believe, except that I am using glMatrix instead of their matrix math library (which shouldn't matter, I would assume). Here's the thing, though: I don't use a model-view matrix for anything I am rendering, so none of my points are in a -1 to 1 range. They can go out as far as -3200, etc. It's just all one big terrain mesh, chunked out.
I think the issue lies with how I am creating the texture matrix:
textureMatrix = glMatrix.mat4.create();
glMatrix.mat4.translate(textureMatrix,textureMatrix,[0.5,0.5,0.5]);
glMatrix.mat4.scale(textureMatrix,textureMatrix,[0.5,0.5,0.5]);
glMatrix.mat4.multiply(textureMatrix,textureMatrix, projectionMatrix);
glMatrix.mat4.invert(lightMatrix,lightMatrix);
glMatrix.mat4.multiply(textureMatrix,textureMatrix, lightMatrix);
I am using the same matrix for the light projection as for the normal camera projection; is that an issue? If anyone could help, it would be greatly appreciated.
That's probably because the Y position of your light (in your example it is really the distance between the eye and the scene) is too big for the Z size of your shadow volume (the size of your shadow volume along the view direction). Here, posY is inside the wireframe box:
But if you increase posY too much (i.e. your shapes get outside the shadow volume), they disappear.
So you should increase the size of your shadow volume (or shrink your scene, either way). You cannot simulate that with the sliders, because they only give you control over the X and Y dimensions: projWidth and projHeight.
For example, in the last code listing on your tutorial page, change the last parameter ("far") from 10 to 100:
const lightProjectionMatrix = settings.perspective
    ? m4.perspective(
        degToRad(settings.fieldOfView),
        settings.projWidth / settings.projHeight,
        0.5,  // near
        10)   // far
    : m4.orthographic(
        -settings.projWidth / 2,   // left
        settings.projWidth / 2,    // right
        -settings.projHeight / 2,  // bottom
        settings.projHeight / 2,   // top
        0.5,                       // near
        100);                      // far
Then you can increase posY far more.
Without your full code, it is hard to reproduce and help you. Could you try to just inject your scene into the tutorial code? You can bind the viewpoint to the position and orientation of the light by using the same inputs (just adding 0.5 to X so you see a bit of shadow and can make sure it is properly computed):
/*const cameraPosition = [settings.cameraX, settings.cameraY, 15];*/
const cameraPosition = [settings.posX+0.5, settings.posY, settings.posZ];
/*const target = [0, 0, 0]; */
const target = [settings.targetX, settings.targetY, settings.targetZ];

Pixel-perfect collisions in Monogame, with float positions

I want to detect pixel-perfect collisions between 2 sprites.
I use the following function, which I found online but which makes total sense to me.
static bool PerPixelCollision(Sprite a, Sprite b)
{
    // Get the color data of each texture
    Color[] bitsA = new Color[a.Width * a.Height];
    a.Texture.GetData(0, a.CurrentFrameRectangle, bitsA, 0, a.Width * a.Height);
    Color[] bitsB = new Color[b.Width * b.Height];
    b.Texture.GetData(0, b.CurrentFrameRectangle, bitsB, 0, b.Width * b.Height);

    // Calculate the intersecting rectangle
    int x1 = (int)Math.Floor(Math.Max(a.Bounds.X, b.Bounds.X));
    int x2 = (int)Math.Floor(Math.Min(a.Bounds.X + a.Bounds.Width, b.Bounds.X + b.Bounds.Width));
    int y1 = (int)Math.Floor(Math.Max(a.Bounds.Y, b.Bounds.Y));
    int y2 = (int)Math.Floor(Math.Min(a.Bounds.Y + a.Bounds.Height, b.Bounds.Y + b.Bounds.Height));

    // For each single pixel in the intersecting rectangle
    for (int y = y1; y < y2; ++y)
    {
        for (int x = x1; x < x2; ++x)
        {
            // Get the color from each texture
            Color colorA = bitsA[(x - (int)Math.Floor(a.Bounds.X)) + (y - (int)Math.Floor(a.Bounds.Y)) * a.Texture.Width];
            Color colorB = bitsB[(x - (int)Math.Floor(b.Bounds.X)) + (y - (int)Math.Floor(b.Bounds.Y)) * b.Texture.Width];

            // If both colors are not transparent (the alpha channel is not 0), there is a collision
            if (colorA.A != 0 && colorB.A != 0)
            {
                return true;
            }
        }
    }

    // If no collision occurred by now, we're clear.
    return false;
}
(All the Math.Floor calls are useless as-is; I copied this function from my current code, where I'm trying to make it work with floats.)
It reads the color of the sprites in the rectangle portion that is common to both sprites.
This actually works fine when I display the sprites at x/y coordinates where x and y are ints (.Bounds.X and .Bounds.Y):
View an example
The problem with displaying sprites at int coordinates is that it results in very jagged movement on diagonals:
View an example
So ultimately I would like to not cast the sprite positions to ints when drawing them, which results in smooth(er) movement:
View an example
The issue is that PerPixelCollision works with ints, not floats, which is why I added all those Math.Floor calls. As is, it works in most cases, but it misses one row and one column of checks at the bottom and right (I think) of the common rectangle, because of the rounding induced by Math.Floor:
View an example
When I think about it, it makes sense. If x1 is 80 and x2 would actually be 81.5 but becomes 81 because of the cast, then the loop will only run for x = 80 and therefore miss the last column (in the example gif, the fixed sprite has a transparent column to the left of the visible pixels).
The issue is that no matter how hard I think about this, or what I try (and I have tried a lot of things), I cannot make this work properly. I am almost convinced that x2 and y2 should use Math.Ceiling instead of Math.Floor, so as to "include" the last pixel that is otherwise left out, but then I always get an index out of range in the bitsA or bitsB arrays.
Would anyone be able to adjust this function so that it works when Bounds.X and Bounds.Y are floats?
PS - could the issue possibly come from BoxingViewportAdapter? I am using this (from MonoGame.Extended) to "upscale" my game, which is actually 144p.
Remember, there is no such thing as a fractional pixel. For movement purposes, it completely makes sense to use floats for the values and cast them to integer pixels when drawn. The problem is not in the fractional values, but in the way that they are drawn.
The main reason the collisions do not appear to work correctly is the scaling. The new pixels in between the diagonals get their colors by averaging* the surrounding pixels. The effect makes the image appear larger than the original, especially on the diagonals.
*There are several methods that may be used for the scaling; bi-cubic and linear are the most common.
The only direct (pixel-perfect) solution is to compare the actual output after scaling. This requires rendering the entire screen twice and multiplies the work by the scale factor (not recommended).
Since you are comparing the non-scaled images, your collisions appear to be off.
The other issue is movement speed. If you are moving faster than one pixel per Update(), detecting per-pixel collisions is not enough when the movement is to be restricted by the obstacle: you must also resolve the collision.
For enemies or environmental hazards your original code is sufficient and collision resolution is not required. It will give the player a minor advantage.
A simple resolution algorithm (see below for a mathematical solution) is to unwind the movement by half and check for collision. If it is still colliding, unwind the movement by a quarter; otherwise, advance it by a quarter and check again. Repeat until the movement is less than 1 pixel. This runs log(speed) times; a rough sketch follows.
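Something like this (a sketch only, using the PerPixelCollision function from the question; the Position and Velocity members are assumed names, not part of the original code):
// Halving resolution described above; 'Position' and 'Velocity' are assumed members.
static void ResolveCollision(Sprite moving, Sprite obstacle)
{
    Vector2 step = moving.Velocity * 0.5f;
    moving.Position -= step;                    // unwind half of this frame's movement
    while (step.Length() >= 1f)                 // stop once the step is under one pixel
    {
        step *= 0.5f;
        if (PerPixelCollision(moving, obstacle))
            moving.Position -= step;            // still overlapping: back off further
        else
            moving.Position += step;            // clear: advance again
    }
}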
As for the top wall not colliding perfectly: if the starting Y value is not a multiple of the vertical movement speed, you will not land perfectly on zero. I prefer to resolve this by setting Y = 0 when Y is negative. The same applies to X, and to X and Y > screen bounds - origin for the bottom and right of the screen.
I prefer to use mathematical solutions for collision resolution. In your example images, you show a box colliding with a diamond; the diamond shape is represented mathematically by the Manhattan distance (Math.Abs(x1-x2) + Math.Abs(y1-y2)). From this fact, it is easy to directly calculate the resolution of the collision (see the sketch below).
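As an illustration of that idea (purely a sketch; the diamond's centre (cx, cy) and half-width r are assumed values, not taken from the question):
// Push a point (px, py) out of a diamond of half-width r centred at (cx, cy).
// Inside the diamond means Math.Abs(dx) + Math.Abs(dy) < r.
static Vector2 ResolveDiamond(float px, float py, float cx, float cy, float r)
{
    float dx = px - cx, dy = py - cy;
    float penetration = r - (Math.Abs(dx) + Math.Abs(dy));
    if (penetration > 0f)
    {
        // Move out along the dominant axis until the Manhattan distance is r again.
        if (Math.Abs(dx) >= Math.Abs(dy))
            px += Math.Sign(dx) * penetration;
        else
            py += Math.Sign(dy) * penetration;
    }
    return new Vector2(px, py);
}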
On optimizations:
Be sure to check that the bounding Rectangles are overlapping before calling this method.
As you have stated, remove all the Math.Floor calls, since the cast is sufficient. Move all calculations inside the loops that do not depend on the loop variables out of the loops.
The (int)a.Bounds.Y * a.Texture.Width and (int)b.Bounds.Y * b.Texture.Width terms do not depend on the x or y variables and should be calculated and stored before the loops. The per-row values that depend only on y should be computed and stored once in the "y" loop (see the sketch after this list).
I would recommend using a bitboard (1 bit per 8-by-8 square) for collisions. It reduces the broad (8x8) collision checks to O(1). For a resolution of 144x144, the entire search space becomes 18x18.
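As an illustration, the loops could look roughly like this once the invariant terms are hoisted (a sketch reusing the names from the function above, not a drop-in replacement):
// Hoisted version of the inner loops from PerPixelCollision above.
int aX = (int)a.Bounds.X, aY = (int)a.Bounds.Y;
int bX = (int)b.Bounds.X, bY = (int)b.Bounds.Y;
for (int y = y1; y < y2; ++y)
{
    int rowA = (y - aY) * a.Texture.Width - aX;   // per-row offset, computed once
    int rowB = (y - bY) * b.Texture.Width - bX;
    for (int x = x1; x < x2; ++x)
    {
        if (bitsA[x + rowA].A != 0 && bitsB[x + rowB].A != 0)
            return true;
    }
}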
You can wrap your sprite with a rectangle and use its Intersects function, which detects collisions; see the example below.
Intersect - XNA
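For example (a sketch; 'BoundingBox' is an assumed property returning an integer Rectangle, not something from the question):
// Broad-phase test with Rectangle.Intersects before the expensive per-pixel check.
if (a.BoundingBox.Intersects(b.BoundingBox))
{
    bool hit = PerPixelCollision(a, b);   // only now compare individual pixels
}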

Gravity not working correctly

My code for physics in my game is this:
-- dt = time after each update
self.x = math.floor(self.x + math.sin(self.angle) * self.speed * dt)
self.y = math.floor(self.y - math.cos(self.angle) * self.speed * dt)
-- addVector(speed, angle, speed2, angle2)
self.speed, self.angle = addVector(self.speed, self.angle, g, math.pi)
When it hits the ground, the code for the bounce is:
self.angle = math.pi - self.angle
self.y = other.y - self.r
The function addVector is defined here:
function addVector(speed, angle, speed2, angle2)
    local x = math.sin(angle) * speed + math.sin(angle2) * speed2
    local y = math.cos(angle) * speed + math.cos(angle2) * speed2
    local v = math.sqrt(x^2 + y^2)
    local a = math.pi/2 - math.atan(y, x)
    return v, a
end
But when I place a single ball in the simulation, without any drag or elasticity, the height of the ball after each bounce keeps getting higher. Any idea what may be causing the problem?
Edit: self.r is the radius, self.x and self.y are the position of the centre of the ball.
Your Y axis increases downward and decreases upward.
Taking self.y = math.floor(...) therefore moves your ball a bit upward every frame.
The solution is to store your coordinates with maximal precision.
You could make a new variable y_for_drawing = math.floor(y) to draw the ball at a pixel with integer coordinates, but your main y value must keep its fractional part.
I've managed to get your code to run and reproduce the behavior you are seeing. I also find it difficult to figure out the issue. Here's why: movement physics involves position, which is affected by a velocity vector, which in turn is affected by an acceleration vector. In your code these are all there, but they are in no way clearly separated. There are trig functions and floor functions all interacting in a way that makes it difficult to see what role they play in the final position calculation.
By far the best and easiest-to-understand tutorial to help you implement basic physics like this is The Nature of Code (free to read online, with interactive examples). As an exercise, I ported most of the early exercises into Lua. I would suggest you see how he clearly separates the position, velocity and acceleration calculations.
As an experiment, increase g to a much higher number. When I did that, I noticed the ball would eventually settle to the ground, but of course the bounces were too fast and it didn't bounce in a way that seems natural.
Also, define other.y - it doesn't seem to affect the bouncing issue, but just to be clear on what that is.

Work out world coordinates relative to the position and rotation of an object in Lua

I am trying to make a new tool for the Tabletop Simulator community, based on my "pack up bag". The Packup Bag is a tool that remembers the world position and rotation of objects you place inside it, so you can then place them back in the same positions and rotations they came from when "unpacking the bag".
I have been trying to modify this so it spits things out at a position and rotation relative to the bag, instead of using hardcoded world coordinates. The idea is that players can sit at any location at the table, pick the faction bag they wish to play, drop it on a known spot marked for them, press "place", and it will populate the contents of the bag relative to its location.
Now I have gotten some of this worked out. I am able to get the bag to place things relative to it in some ways, but I am finding it beyond my maths skills to work out the modifications to the transforms.
Basically, I have this part working:
The mod understands position relative to the bag
The mod understands rotation relative to the bag
BUT the mod does not understand relative position AND rotation at the same time. I need some way to modify the position data relative to the rotation data, but I cannot work out how.
See this video....
https://screencast-o-matic.com/watch/cFiOeYFsyi
As you can see, as I move the bag around, the object is placed relative to it. But if I rotate the bag, the object has the correct rotation, yet I need the math to work out the correct position when it is rotated. You can see it is just being placed in the same position as if there were no rotation, because I haven't worked out how to code that.
Now I have heard of something called "matrix math", but I couldn't understand it. I'm a self-taught programmer of only a few months, since I started modding TTS.
You can kind of understand what I mean, I hope. In the video example, when I rotate the bag, the object should be placed with the correct rotation, but the world position needs to change.
See this Example to see relative rotation ....
https://screencast-o-matic.com/watch/cFiOeZFsyq
My code does this by remembering the self.getPosition() of the bag and the getPosition() of the object being packed up. It then does a self - obj and stores that value for the X and Y position. It also remembers whether it is negative or positive, and then when placing it uses self.getPosition() and adds or subtracts the adjustment value. Same for rotation.
Still, I do not know where to go from here. I have been kind of hurting my head on this and thought maybe some of you math folks might have a better idea of how to do this.
: TL;DR :
So I have:
bag.getPosition() and obj.getPosition()
bag.getRotation() and obj.getRotation()
These return {x, y, z}.
What math can I use to find the relative position and rotation of the objects to the bag, so that if I rotate the bag, the objects come out of it in a relative way?
Preferably in Lua. Thank you!
I'd hope you've found the answer by now, but for anyone else finding this page:
The problem is much simpler than what you're suggesting - it's basic right triangle trigonometry.
Refer to this diagram. You have a right triangle with points A, B, and C, where C is the right angle. (For brevity, I'll use the abbreviations opp, adj, and hyp.) The bag is at point A, and you want the object at point B. You have the angle and the distance (angle A and the length of the hyp, respectively), but you need the x,y coordinates of point B relative to point A.
The x coordinate is the length of adj, and the y coordinate is the length of opp. As shown, the formulas to calculate these are:
cos(angle A) = adj/hyp
sin(angle A) = opp/hyp
solving for the unknowns:
adj = hyp * cos(angle A)
opp = hyp * sin(angle A)
For your specific use, and taking into account the shift in coordinate system x,y,z => x,z,y:
-- Note: getRotation() returns degrees, so convert to radians for math.cos/math.sin
obj_x_offset = distance * math.cos(math.rad(bag.getRotation().y))
obj_z_offset = distance * math.sin(math.rad(bag.getRotation().y))
obj_x_position = bag.getPosition().x + obj_x_offset
obj_z_position = bag.getPosition().z + obj_z_offset
Diagram source:
https://www.khanacademy.org/math/geometry/hs-geo-trig/hs-geo-modeling-with-right-triangles/a/right-triangle-trigonometry-review

How to use this DXF Bulge Arc function getArcDataFromBulge()?

I have a problem using this bulge arc (DXF parser) function getArcDataFromBulge() in C++:
https://github.com/Embroidermodder/Embroidermodder/blob/master/libembroidery/geom-arc.c
I have my drawArc() function, which needs the 'start angle' and 'sweep angle' parameters from this getArcDataFromBulge() function.
My drawArc() function uses an OpenGL 2D coordinate system with the zero-angle position on the right side, and when I get values from getArcDataFromBulge() and recalculate them (0±, 180±, 360±), I get something like unexpected opposite angles as results. It looks like a clockwise/counterclockwise problem, but I think it is not; I'm not sure. Do you have some idea what is going on?
For example:
tempBulge.bulge := 0.70;
arcMidAngle := RadToDeg( atan2(tempBulge.arcMidY - tempBulge.arcCenterY,
tempBulge.arcCenterX - tempBulge.arcMidX) );
After the calculation: arcMidAngle = 179.999.
When I add and subtract half of the arc's chord angle from this point, I get the start and end angles of my arc: 90° and 270°. But it's not the same arc I see when I open the DXF in CAD software; it is the opposite of the original drawing.
If you have an arc from 0° to 90°, it could be a 1/4 circle or a 3/4 circle.
You need to parse the $ANGDIR and $ANGBASE variables from the HEADER section which tells you in which direction angles are defined ($ANGDIR) and where the 0° angle starts ($ANGBASE) within that specific DXF file:
Variable   Group code   Description
$ANGBASE   50           Angle 0 direction
$ANGDIR    70           1 = Clockwise angles, 0 = Counterclockwise
For DXF, if $ANGBASE = 0, then 0° is to the right of the center, just like on Windows.
Furthermore, in DXF the positive Y-axis points upwards, in contrast to many Windows APIs where the positive Y-axis points downwards.
