LOVE2D: Scale (zoom) to/from point (cursor) - lua

I am trying to implement camera movement in LOVE2D. I mostly followed this code, which implements some basic camera movement. I struggle, however, to implement zooming to a point (the mouse cursor, for example).
Scaling works by multiplying everything by a given factor (for the x and y axes), which causes all objects to "slide" to the side. Changing the love.graphics.scale() function is beyond my capabilities, so as a workaround I tried to offset the slide with camera movement, but it didn't work.
Is there a way to have a zoom to point functionality in LOVE2D?

love.graphics.scale scales from the origin, (0, 0). To scale around the mouse position, you'll want to do a love.graphics.translate by minus the mouse position before scaling. (Depending on how complicated your setup is, you might need some other corrections, but I can't say with the little information you've given us.)
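The full recipe is: translate to the zoom point, scale, then translate back by the zoom point. Below is that composition sketched in Swift with CGAffineTransform (my stand-in; the corresponding LOVE2D calls love.graphics.translate and love.graphics.scale compose the same way, with the first call applied to drawn geometry last):

import CoreGraphics

// Zoom about `mouse` by a factor of `zoom`: a point at `mouse` maps back
// to itself, so the scene appears to zoom into the cursor.
// In this chain, the transform written last is applied to a point first.
func zoomTransform(about mouse: CGPoint, zoom: CGFloat) -> CGAffineTransform {
    CGAffineTransform.identity
        .translatedBy(x: mouse.x, y: mouse.y)   // 3. move the cursor back in place
        .scaledBy(x: zoom, y: zoom)             // 2. scale about the origin
        .translatedBy(x: -mouse.x, y: -mouse.y) // 1. move the cursor to the origin
}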

Related

How to determine the rotation angle?

I'm trying to implement a russian roulette game and want to brute-force the solution for it. Here is my problem. I'm going to hard-code the relative angles of the numbers on the wheel (e.g. there are 36 numbers, so each number is offset 10 degrees from the next; the one on top, at the 12 o'clock position, is at 0 degrees, the next at 10 degrees, and so on). I will rotate the wheel randomly and then determine its rotation based on some values that I can calculate (startPosition to finishedPosition). The wheel is an ImageView. Is there a way to actually do this? For example, get the top-left x,y position at its start and end, then calculate by some formula how much it rotated. Or is there a better way to do this? There is not much source code to show, so this is more a mathematical question than a Swift one. Any feedback is much appreciated.
To calculate the rotation, you need the coordinates of three points: the start location (sx, sy) and the end location (ex, ey) of the same point after rotation, plus the center of rotation (cx, cy).
Then you can find the angle using the atan2 function:
rot_angle = atan2((ex-cx)*(sx-cx)+(ey-cy)*(sy-cy), (ex-cx)*(sy-cy)-(ey-cy)*(sx-cx))
Note - I used the argument order (x, y) from here, while most languages use the reverse order (y, x), so check which order you really need (I have no experience with iOS languages). Also, the result value might be in radians or in degrees (the above link doesn't specify it clearly).
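Translated into code with the conventional (y, x) argument order, the cross product goes first and the dot product second. A minimal sketch in Swift (the function name and CGPoint parameters are mine; flip the sign of the result if your coordinate system has y pointing down):

import CoreGraphics

// Signed angle (radians, counter-clockwise positive in a y-up system) that
// rotates `start` onto `end` around the centre `c`.
func rotationAngle(from start: CGPoint, to end: CGPoint, around c: CGPoint) -> CGFloat {
    let sx = start.x - c.x, sy = start.y - c.y
    let ex = end.x - c.x, ey = end.y - c.y
    let cross = sx * ey - sy * ex // proportional to sin(angle)
    let dot = sx * ex + sy * ey   // proportional to cos(angle)
    return atan2(cross, dot)      // Swift's atan2 takes (y, x)
}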
Your question doesn't make much sense. If you rotate the wheel randomly, generate the random value as an angle. If you want to change the previous rotation by some random angle, then do the math on the starting rotation and the ending rotation. That is just adding and subtracting angles (modulo 2π). Then you will already know how far it has rotated and won't have to calculate it.
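For instance, if the rotation is kept as a single accumulated angle, wrapping it into [0, 2π) after each spin is all the "calculation" needed (a sketch; the names are made up):

import Foundation

var wheelAngle = 0.0                                          // current rotation in radians
let spin = 8 * Double.pi + Double.random(in: 0 ..< 2 * .pi)   // a few full turns plus a random offset
wheelAngle = (wheelAngle + spin).truncatingRemainder(dividingBy: 2 * .pi)
// wheelAngle is now in [0, 2π); dividing by the 10-degree slot size
// (10 * .pi / 180) gives the index of the winning number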
Assuming you're talking about a roulette wheel, and not "Russian Roulette" (in American English at least, that term involves pointing a loaded revolver at your head), you'll need to track both the wheel rotation and the ball rotation. To apply the rotation to the wheel, just take the image of the wheel and rotate it on the Z axis around its x/y center point.
To plot the ball, you'll need to use trig to calculate the center of the ball based on the radius of the track the ball follows and the angle. But again, always track the angle, and then convert the angle to an x/y center point to plot the ball. Don't discard the angle and then have to convert back from the ball position to its angle. That's silly.
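That conversion is one line of trig. A sketch, assuming `center` is the wheel's x/y centre point, `trackRadius` the radius of the track the ball follows, and `angle` the ball's angle in radians (measured from the +x axis):

import CoreGraphics

// Convert the tracked angle into the ball's x/y centre point for plotting.
func ballPosition(center: CGPoint, trackRadius: CGFloat, angle: CGFloat) -> CGPoint {
    CGPoint(x: center.x + trackRadius * cos(angle),
            y: center.y + trackRadius * sin(angle))
}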

How is the initial velocity of UISpringTimingParameters specified?

I'm trying to perform a spring animation on a view which is released by the user's pan gesture and may have a non-zero velocity. I'm basically trying to recreate the animation of this WWDC video where they use UISpringTimingParameters(dampingRatio:initialVelocity:). However, the documentation seems to contradict itself:
velocity
The initial velocity and direction of the animation, specified as a unit vector.
[...]
For example, if the total animation distance is 200 points and the view’s initial velocity is 100 points per second, specify a vector with a magnitude of 0.5.
If 0.5 is an example value, then apparently it doesn't need to be a unit vector after all. And it's not possible to encode a velocity in a unit vector in the first place.
Not being able to rely on the documentation, I tried plugging in several different values, but nothing led to even remotely satisfactory results.
How do I use this API?
Good question.
TL;DR: If you are trying to animate something to a position in 2D, you need to animate each coordinate separately, each with its respective x / y velocity.
If you combine them by taking the scalar projection of the velocity onto your offset, you get a weird artifact. Say you are flicking a view around the screen and the target is the center of the screen. If you flick the view up along the right side, the animation has to return it to the center, but the combined velocity points away from the center, so the animation can only assume the view is moving in a straight line away from the center: it jags out to the right before animating back to the center.
https://github.com/chrisco314/SpringAnimationTest
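A minimal sketch of that per-axis approach (the damping ratio and duration are arbitrary values of mine; the point is that each axis gets its own animator, and the "unit vector" magnitude is the gesture velocity divided by the remaining distance on that axis, exactly as in the documentation's 100-points-over-200-points example):

import UIKit

func springBack(_ view: UIView, to target: CGPoint, gestureVelocity: CGPoint) {
    let dx = target.x - view.center.x
    let dy = target.y - view.center.y
    // Relative initial velocity: points-per-second divided by points-to-travel.
    let vx = dx == 0 ? 0 : gestureVelocity.x / dx
    let vy = dy == 0 ? 0 : gestureVelocity.y / dy

    let springX = UISpringTimingParameters(dampingRatio: 0.8,
                                           initialVelocity: CGVector(dx: vx, dy: 0))
    let springY = UISpringTimingParameters(dampingRatio: 0.8,
                                           initialVelocity: CGVector(dx: vy, dy: 0))

    let animatorX = UIViewPropertyAnimator(duration: 0.5, timingParameters: springX)
    animatorX.addAnimations { view.center.x = target.x }
    let animatorY = UIViewPropertyAnimator(duration: 0.5, timingParameters: springY)
    animatorY.addAnimations { view.center.y = target.y }
    animatorX.startAnimation()
    animatorY.startAnimation()
}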

Determining the angle by which to rotate the robot with respect to another object

I am currently working on a project where I need to determine whether a robot, with an ArUco marker on top of it, needs to rotate in a certain direction in order to point, with its front, towards a particular object whose centre point is known. So basically, what I've got is the centre point of the ball and the four corner points of the marker.
I'm including an example of what I mean as an image.
Note the little arrow drawn on the marker cardboard. It shows the front side of the robot.
Lastly: I have a camera that captures frames, and the program prints out the rotation vector. For some reason, the values are different in every frame, even though I intentionally left the robot in the same position. Could anyone please explain why that might be?
Thanks a lot.
EDIT: I've got the issue with the fluctuating rotation vector sorted; now I just need to figure out how to use that output to get the orientation of the robot with respect to the ball (of which I have the centre point), which apparently is done through the X axis.
I'm adding another image, which shows the x-axis in red, the y-axis in blue and the z-axis in green. The vectors are of type cv::Vec3d.
First, some code:
std::vector<cv::Vec3d> rvecs, tvecs; // one rotation / translation vector per detected marker
cv::aruco::estimatePoseSingleMarkers(corners, 0.05, CAMERA_MATRIX, DISTORTION_COEFFICIENTS, rvecs, tvecs); // 0.05 = marker side length in metres
And the image showing what I mean:
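For the EDIT: one way to get a usable orientation out of the rotation vector is to apply it (it is an axis-angle rotation, which a unit quaternion reproduces exactly as cv::Rodrigues would) to the marker's local X axis, then compare that direction with the direction from the marker to the ball. A sketch in Swift with simd, under the simplifying assumption that the comparison happens in a single x/y plane (all names here are mine):

import Foundation
import simd

// Rotate the marker's local X axis (1, 0, 0) by the axis-angle vector `rvec`
// returned by cv::aruco::estimatePoseSingleMarkers.
func markerXAxis(rvec: SIMD3<Double>) -> SIMD3<Double> {
    let theta = simd_length(rvec)
    guard theta > 1e-9 else { return SIMD3(1, 0, 0) }
    let q = simd_quatd(angle: theta, axis: rvec / theta) // same rotation as cv::Rodrigues
    return q.act(SIMD3(1, 0, 0))
}

// Signed angle the robot should turn so its front (the marker's X axis)
// points at the ball; `marker` and `ball` are centre points in that plane.
func turnAngle(rvec: SIMD3<Double>, marker: SIMD2<Double>, ball: SIMD2<Double>) -> Double {
    let x = markerXAxis(rvec: rvec)
    let heading = atan2(x.y, x.x)
    let toBall = atan2(ball.y - marker.y, ball.x - marker.x)
    var d = toBall - heading
    while d > .pi { d -= 2 * .pi }   // normalize to (-π, π]
    while d <= -.pi { d += 2 * .pi }
    return d
}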

How do I get more reliable Y position tracking for the Google Tango in Unity?

We have a Unity scene that uses area learning, which has been extremely reliable and consistent about the XZ position. However, we are noticing that sometimes the Tango delta camera's Y position will "jump up" very high in the scene. When we force the Tango to relocalize (by covering the sensors for a few seconds), the Y position remains very off. At other times, the Y position varies by 0.5 - 1.5 Unity units when we first start up our Unity app on the Tango while holding it in the exact same position in the exact same room and using the same ADF file. Is there a way to get more reliable Y position tracking and/or correct for these jumps?
(All XYZ coordinates in this context follow the Unity convention: x is right, y is up, z is forward.)
The Y position should work the same as the XZ coordinates; it relocalizes to a height based on the ADF origin.
But note that the ADF's origin is where you started learning (recording) the ADF. Say you started the learning session holding the device normally; then the ADF's origin might be a little higher than ground level. When you construct a virtual world to relocalize in, you should take this height difference into consideration.
Another thing to check is making sure there's no offset or original location set on the DeltaPoseController prefab. DeltaPoseController takes its initial starting transformation as an offset and adds the pose on top of it. For example, if my DeltaPoseController's starting position is (0,1,0) and the pose from the device is (0,1,1), then the actual position of DeltaPoseController in Unity will be (0,2,1). This applies to both translation and rotation.
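A trivial sketch of the translation part of that composition (names mine):

import simd

let startOffset = SIMD3<Double>(0, 1, 0)     // DeltaPoseController's position at startup
let devicePose = SIMD3<Double>(0, 1, 1)      // pose reported by the device
let unityPosition = startOffset + devicePose // (0, 2, 1): the startup offset is baked in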
Another, more advanced (and preferred) way of defining the ground level is to use the depth sensor to find the ground height. The Unity Augmented Reality example shows how to detect a plane and place a marker on it. You can easily apply a similar method to the ground plane: do a PlaneFinding and place the ground at the right height in Unity world space.

What is this rotation behavior in XNA?

I am just starting out in XNA and have a question about rotation. When you multiply a vector by a rotation matrix in XNA, it goes counter-clockwise. This I understand.
However, let me give you an example of what I don't get. Let's say I load a random art asset into the pipeline. I then create some variable to increment every frame by 2 degrees when the update method runs (testRot += 0.034906585f). The source of my confusion is that the asset rotates clockwise in this screen space. This confuses me, as a rotation matrix should rotate a vector counter-clockwise.
One other thing: when I specify my position vector as well as my origin, I understand that I am rotating about the origin. Am I to assume that there are perpendicular axes passing through this asset's origin as well? If so, where does rotation start from? In other words, am I starting rotation from the top of the Y axis or from the X axis?
The XNA SpriteBatch works in Client Space, where "up" is Y-, not Y+ (as it is in Cartesian space, projection space, and what most people usually select for their world space). This makes the rotation appear clockwise (not counter-clockwise as it would in Cartesian space). The actual coordinates the rotation produces are the same.
Rotations are relative, so they don't really "start" from any specified position.
If you are using maths functions like sin or cos or atan2, then absolute angles always start from the X+ axis as zero radians, and the positive rotation direction rotates towards Y+.
The order of operations of SpriteBatch looks something like this:
Sprite starts as a quad with the top-left corner at (0,0), its size being the same as its texture size (or SourceRectangle).
Translate the sprite back by its origin (thus placing its origin at (0,0)).
Scale the sprite
Rotate the sprite
Translate the sprite by its position
Apply the matrix from SpriteBatch.Begin
This places the sprite in Client Space.
Finally a matrix is applied to each batch to transform that Client Space into the Projection Space used by the GPU. (Projection space is from (-1,-1) at the bottom left of the viewport, to (1,1) in the top right.)
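Written out explicitly, the composition looks like this. A sketch in Swift using CGAffineTransform as a stand-in for XNA's matrix math (in this chain, the call written last is applied to the sprite first, matching the list above):

import CoreGraphics

// Order of operations for one sprite: origin -> scale -> rotate -> position.
func spriteTransform(origin: CGPoint, scale: CGFloat,
                     rotation: CGFloat, position: CGPoint) -> CGAffineTransform {
    CGAffineTransform.identity
        .translatedBy(x: position.x, y: position.y) // 4. move to its position
        .rotated(by: rotation)                      // 3. rotate about the origin point
        .scaledBy(x: scale, y: scale)               // 2. scale
        .translatedBy(x: -origin.x, y: -origin.y)   // 1. put the origin at (0, 0)
}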
Since you are new to XNA, allow me to introduce a library that will greatly help you out while you learn. It is called XNA Debug Terminal, an open-source project that allows you to run arbitrary code at runtime, so you can check whether your variables have the values you expect. All of this happens in a terminal display on top of your game, without pausing the game. It can be downloaded at http://www.protohacks.net/xna_debug_terminal
It is free and very easy to set up, so you really have nothing to lose.
