How to find a point along a vector at a variable distance - iOS

I need to find the value along a vector for a given x coordinate, like so:
I know the values of A, B and C. All of these values are variable. I need to calculate X. I know this is possible, I just can't remember my trigonometry lessons.
I'm aware of similar questions like this one, but that one only finds the mid-point.
Thank you.

Let's say A(x1, y1) and B(x2, y2), and the coordinates of X are (x, y). Then:
y = ((y2 - y1) / (x2 - x1)) * x + c .....(1)
where c is the y-intercept, which in this case is 0.

y = ||C - A|| / ||D - A||
Z = (B - A) * y
where y is the length of vector C minus vector A, divided by the length of D (the unlabeled original point along the x axis) minus vector A.

For a line through the origin, as you have in the picture, you can use the idea of similar triangles:
X_y = B_y * (X_x/B_x)
Or, for the numbers shown in the example, X_y = 50, and X=(50,50).
To understand this, the idea of similar triangles says:
X_y/X_x = B_y/B_x
since triangles with similar shapes (i.e., that have the same angles) have the same ratios; the first formula is just the second solved for X_y.
(If the line isn't through the origin, first subtract A from everything, then calculate X_y as above, then add A to everything.)
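Here is a minimal C# sketch of that recipe (Point2, LinePoint and PointAtX are my own names, not from the question; the math translates directly to Swift or Objective-C). Subtracting A and adding it back handles the general case:

    using System;

    struct Point2
    {
        public double X, Y;
        public Point2(double x, double y) { X = x; Y = y; }
    }

    static class LinePoint
    {
        // Point on the line through a and b whose x-coordinate is cx.
        // Assumes the line is not vertical (a.X != b.X).
        public static Point2 PointAtX(Point2 a, Point2 b, double cx)
        {
            double t = (cx - a.X) / (b.X - a.X); // similar-triangles ratio
            return new Point2(cx, a.Y + t * (b.Y - a.Y));
        }
    }

For example, with A = (0, 0), B = (100, 100) and cx = 50 this returns (50, 50), matching the numbers above.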

Related

How to implement an "Excel like" "add cell" into a canvas

I'm sorry for the title, which maybe can't properly describe what I would like to achieve. I'm starting to develop a new piece of software which should present a "grid" to the user, who can manipulate it by adding "rows" or "columns" at any point of this "grid". The problem is that I'm not sure a real grid is the suitable solution, because there are some "graphical" requirements like changing individual cell sizes, nesting cells, zooming/stretching, etc. So I started to analyze a solution in WPF that uses DrawingVisual elements (for performance reasons).
I'm able to draw the "grid" in the desired way. I'm also able to add rows or columns at the edges of the drawing. But I can't figure out any solution to modify it in the "middle" (except redrawing the whole thing). I'll explain better with an image. On the left there's the "grid" after it has been drawn for the first time. On the right there's a new grid that should be drawn after the user performs an operation.
A more complex example is the following, where the "row" is added inside an existing cell, causing all the cells to "grow".
As I said, I know I could redraw the whole thing, but I'm concerned about performance. Keep in mind that in a real scenario there could be thousands of blocks and many nesting levels.
Any suggestion is appreciated. The use of WPF is not mandatory, but it will be a desktop app in .NET 5.0. The use of DrawingVisual is not mandatory either. I can evaluate any solution. Thank you.
A simple technique is to keep the positions of the columns relative to the left of the canvas in a variable when you first draw the grid. When you want to add a new column, you can crop the image at that point and, on a larger canvas, copy the left and right pieces and just draw the new middle column from scratch.
Of course, the coordinates of each column could instead be calculated with image-processing techniques, but that reduces performance.
I wrote this code in Python, but I do not think it would be difficult to convert it to C#.
import cv2
import numpy as np

# Copy one image over another at offset (x, y)
def imdraw(im, over, x, y):
    y1, y2 = y, y + over.shape[0]
    x1, x2 = x, x + over.shape[1]
    for c in range(0, 3):
        im[y1:y2, x1:x2, c] = over[:, :, c]
    return im

pt = 220   # x position where the new column is inserted
col = 300  # width of the new column
off = 15   # margin of the rectangle drawn inside the new column

im = cv2.imread("grid.png", 1)
h, w = im.shape[:2]
crop_left = im[0 : 0 + h, 0:pt]
crop_right = im[0 : 0 + h, pt:w]
cv2.imwrite("left.jpg", crop_left)
cv2.imwrite("right.jpg", crop_right)

# Create an empty image with a white background, wider by one column
out = 255 * np.ones(shape=[h, w + col, 3], dtype=np.uint8)
out = imdraw(out, crop_left, 0, 0)
out = imdraw(out, crop_right, pt + col, 0)
out = cv2.rectangle(
    out,
    pt1=(pt + off, off),
    pt2=(pt + col - off, h - off),
    color=(128, 0, 200),
    thickness=5,
    lineType=cv2.LINE_AA,
)
cv2.imwrite("out.jpg", out)
Output:

Pixel-perfect collisions in Monogame, with float positions

I want to detect pixel-perfect collisions between 2 sprites.
I use the following function, which I found online, but it makes total sense to me.
static bool PerPixelCollision(Sprite a, Sprite b)
{
    // Get the color data of each texture
    Color[] bitsA = new Color[a.Width * a.Height];
    a.Texture.GetData(0, a.CurrentFrameRectangle, bitsA, 0, a.Width * a.Height);
    Color[] bitsB = new Color[b.Width * b.Height];
    b.Texture.GetData(0, b.CurrentFrameRectangle, bitsB, 0, b.Width * b.Height);

    // Calculate the intersecting rectangle
    int x1 = (int)Math.Floor(Math.Max(a.Bounds.X, b.Bounds.X));
    int x2 = (int)Math.Floor(Math.Min(a.Bounds.X + a.Bounds.Width, b.Bounds.X + b.Bounds.Width));
    int y1 = (int)Math.Floor(Math.Max(a.Bounds.Y, b.Bounds.Y));
    int y2 = (int)Math.Floor(Math.Min(a.Bounds.Y + a.Bounds.Height, b.Bounds.Y + b.Bounds.Height));

    // For each single pixel in the intersecting rectangle
    for (int y = y1; y < y2; ++y)
    {
        for (int x = x1; x < x2; ++x)
        {
            // Get the color from each texture
            Color colorA = bitsA[(x - (int)Math.Floor(a.Bounds.X)) + (y - (int)Math.Floor(a.Bounds.Y)) * a.Texture.Width];
            Color colorB = bitsB[(x - (int)Math.Floor(b.Bounds.X)) + (y - (int)Math.Floor(b.Bounds.Y)) * b.Texture.Width];

            // If both colors are not transparent (the alpha channel is not 0), there is a collision
            if (colorA.A != 0 && colorB.A != 0)
            {
                return true;
            }
        }
    }

    // If no collision occurred by now, we're clear.
    return false;
}
(All the Math.Floor calls are useless; I copied this function from my current code, where I'm trying to make it work with floats.)
It reads the color of the sprites in the rectangle portion that is common to both sprites.
This actually works fine when I display the sprites at x/y coordinates where x and y are ints (.Bounds.X and .Bounds.Y):
View an example
The problem with displaying sprites at int coordinates is that it results in very jaggy movement along diagonals:
View an example
So ultimately I would like not to cast the sprite positions to ints when drawing them, which results in smooth(er) movement:
View an example
The issue is that PerPixelCollision works with ints, not floats, which is why I added all those Math.Floor calls. As is, it works in most cases, but it misses one row and one column of checking on the bottom and right (I think) of the common rectangle, because of the rounding induced by Math.Floor:
View an example
When I think about it, it makes sense. If x1 is 80 and x2 would actually be 81.5 but is 81 because of the cast, then the loop will only run for x = 80 and therefore miss the last column (in the example gif, the fixed sprite has a transparent column to the left of the visible pixels).
The issue is that no matter how hard I think about this, or no matter what I try (I have tried a lot of things), I cannot make this work properly. I am almost convinced that x2 and y2 should use Math.Ceiling instead of Math.Floor, so as to "include" the last pixel that is otherwise left out, but then I always get an index out of range in the bitsA or bitsB arrays.
Would anyone be able to adjust this function so that it works when Bounds.X and Bounds.Y are floats?
PS - could the issue possibly come from BoxingViewportAdapter? I am using this (from MonoGame.Extended) to "upscale" my game, which is actually 144p.
Remember, there is no such thing as a fractional pixel. For movement purposes, it completely makes sense to use floats for the values and cast them to integer pixels when drawn. The problem is not in the fractional values, but in the way that they are drawn.
The main reason the collisions do not appear to work correctly is the scaling. The new pixels in between the diagonals get their colors by averaging* the surrounding pixels. This effect makes the image appear larger than the original, especially on the diagonals.
*There are several methods that may be used for the scaling; bi-cubic and linear are the most common.
The only direct (pixel-perfect) solution is to compare the actual output after scaling. This requires rendering the entire screen twice, and requires scale-factor times more computation. (Not recommended.)
Since you are comparing the non-scaled images, your collisions appear to be off.
The other issue is movement speed. If you are moving faster than one pixel per Update(), detecting per-pixel collisions is not enough if the movement is to be restricted by the obstacle. You must resolve the collision.
For enemies or environmental hazards your original code is sufficient and collision resolution is not required. It will give the player a minor advantage.
A simple resolution algorithm (see below for a mathematical solution) is to unwind the movement by half and check for collision. If it is still colliding, unwind the movement by a quarter; otherwise advance it by a quarter and check again. Repeat until the movement is less than 1 pixel. This runs log of Speed times, as in the sketch below.
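A minimal C# sketch of that unwinding loop, assuming the Sprite from the question exposes a Position (Vector2, from MonoGame) that its Bounds follow; the method and parameter names are mine:

    // Called after the sprite has already moved by velocity and is colliding.
    static void ResolveCollision(Sprite moving, Sprite obstacle, Vector2 velocity)
    {
        Vector2 step = velocity * 0.5f;
        moving.Position -= step;                  // unwind by half
        while (step.Length() >= 1f)               // stop once the step is under one pixel
        {
            step *= 0.5f;
            if (PerPixelCollision(moving, obstacle))
                moving.Position -= step;          // still inside: unwind further
            else
                moving.Position += step;          // clear: advance again
        }
        if (PerPixelCollision(moving, obstacle))  // final nudge if still overlapping
            moving.Position -= step;
    }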
As for the top wall not colliding perfectly: if the starting Y value is not a multiple of the vertical movement speed, you will not land perfectly on zero. I prefer to resolve this by setting Y = 0 when Y is negative. It is the same for X, and likewise when X and Y > screen bounds - origin, for the bottom and right of the screen.
I prefer to use mathematical solutions for collision resolution. In your example images, you show a box colliding with a diamond; the diamond shape is represented mathematically by the Manhattan distance (Math.Abs(x1 - x2) + Math.Abs(y1 - y2)). From this fact, it is easy to directly calculate the resolution of the collision.
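As a sketch (the function names and the radius parameter are mine): the inside test for such a diamond, and the penetration depth to unwind by, are one line each:

    // Diamond centered at (cx, cy) with half-diagonal radius.
    static bool InsideDiamond(float x, float y, float cx, float cy, float radius)
    {
        return Math.Abs(x - cx) + Math.Abs(y - cy) <= radius;
    }

    // How far inside the diamond the point is (positive means colliding);
    // unwind the movement by this amount to resolve the collision.
    static float DiamondPenetration(float x, float y, float cx, float cy, float radius)
    {
        return radius - (Math.Abs(x - cx) + Math.Abs(y - cy));
    }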
On optimizations:
Be sure to check that the bounding Rectangles are overlapping before calling this method.
As you have stated, remove all the Math.Floors, since the cast is sufficient. Move all calculations inside the loops that do not depend on the loop variables outside of the loops.
The (int)a.Bounds.Y * a.Texture.Width and (int)b.Bounds.Y * b.Texture.Width terms are not dependent on the x or y variables and should be calculated and stored before the loops. The subtraction y - [above variable] should be stored in the y loop, as in the sketch below.
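A sketch of the hoisted loops, using the fields from the question's code (this replaces the inner loops of PerPixelCollision; behavior is unchanged):

    int ax = (int)a.Bounds.X, ay = (int)a.Bounds.Y;
    int bx = (int)b.Bounds.X, by = (int)b.Bounds.Y;
    for (int y = y1; y < y2; ++y)
    {
        // Row offsets depend only on y, so compute them once per row.
        int rowA = (y - ay) * a.Texture.Width - ax;
        int rowB = (y - by) * b.Texture.Width - bx;
        for (int x = x1; x < x2; ++x)
        {
            if (bitsA[rowA + x].A != 0 && bitsB[rowB + x].A != 0)
                return true;
        }
    }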
I would recommend using a bitboard (1 bit per 8-by-8 square) for collisions. It reduces the broad (8x8) collision checks to O(1). For a resolution of 144x144, the entire search space becomes 18x18; see the sketch below.
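A minimal bitboard sketch for the 144x144 case (the class and method names are mine): one bit per 8x8 cell gives an 18x18 grid that fits in eighteen 32-bit rows, so each broad-phase check is a single mask test. Only fall through to PerPixelCollision when the cell test passes:

    sealed class BitBoard
    {
        private readonly uint[] rows = new uint[18]; // 18 cells fit in 32 bits per row

        public void Set(float x, float y)            // mark the 8x8 cell under (x, y)
        {
            rows[(int)y / 8] |= 1u << ((int)x / 8);
        }

        public bool Occupied(float x, float y)       // O(1) broad-phase check
        {
            return (rows[(int)y / 8] & (1u << ((int)x / 8))) != 0;
        }
    }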
You can wrap your sprite in a Rectangle and use its Intersects function, which detects collisions.
Intersect - XNA

Highcharts Vector Plot with connected vectors of absolute length

Scenario: I need to draw a plot that has a background image. Based on the information in that image there have to be multiple origins (let's call them 'targets') that can move over time. The movements of these targets have to be indicated by arrows/vectors, where the first vector originates at the location of the target, the second vector originates where the previous vector ended, and so on.
The result should look similar to this:
Plot with targets and movement vectors
While trying to implement this, I stumbled upon several questions:
I would use a chart with combined series: a scatter plot to add the targets at exact x/y locations and a vector plot to insert the vectors. Would this be a correct way?
Since I want to set each vector's starting point to exact x/y coordinates, I use rotationOrigin: 'start'. When I change vectorLength to something other than 20, the vector is still shifted by 10 pixels (http://jsfiddle.net/Chop_Suey/cx35ptrh/). This looks like a bug to me. Can it be fixed, or is there a workaround?
When I define a vector it looks like [x, y, length, direction]. But length is a relative unit that is calculated with some magic relative to the longest vector, which is 20 (pixels) by default, or whatever I set vectorLength to. Thus the vectors are not connected, and the space between them changes depending on plot size and axes min/max. I actually want to correlate the length with the plot axes (which might be tricky since the x-axis and y-axis might have different scales). A workaround could be to add a redraw event, recalculate the vectors on every resize, and set vectorLength to the currently longest vector (which again can be calculated to correlate to the axes). This is very cumbersome, and I would prefer to be able to set the vectors somehow like [x1, y1, x2, y2], where (x1, y1) denotes the starting and (x2, y2) the ending point of the vector. Is this possible somehow? Any recommendations?
Since the background image is not just a decoration but relevant for the displayed data to make sense, it should change when I zoom in. Is it possible to 'lock' the background image to the original plot min/max, so that when I zoom in the background image is also zoomed (image quality does not matter)?
Combining these two series shouldn't be problematic at all, and that is the correct way, but it is necessary to change the prototype functions a bit so that the vectors are drawn in a different way. Here is the example: https://jsfiddle.net/6vkjspoc/
There is probably a bug in this module, and we will report it as a new issue as soon as possible. However, we made a workaround (or fix) for it, and now it's working well, as you can see in the example above.
Vector length is currently calculated using a scale; namely, if the vectorLength value is equal to 100 (for example) and the vector series has two points which look like this:
{
    type: 'vector',
    vectorLength: 100,
    rotationOrigin: 'start',
    data: [
        [1, 50000, 1, 120],
        [1, 50000, 2, -120]
    ]
}
Then the highest length of all points is taken, and based on it the scale is calculated for each point. So the first point's length is equal to 50, because the algorithm is point.length / lengthMax, as you can deduce from the code below:
H.seriesTypes.vector.prototype.arrow = function(point) {
    var path,
        fraction = point.length / this.lengthMax,
        u = fraction * this.options.vectorLength / 20,
        o = {
            start: 10 * u,
            center: 0,
            end: -10 * u
        }[this.options.rotationOrigin] || 0;

    // The stem and the arrow head. Draw the arrow first with rotation 0,
    // which is the arrow pointing down (vector from north to south).
    path = [
        'M', 0, 7 * u + o, // base of arrow
        'L', -1.5 * u, 7 * u + o,
        0, 10 * u + o,
        1.5 * u, 7 * u + o,
        0, 7 * u + o,
        0, -10 * u + o // top
    ];

    return path;
}
Regarding your question about defining the start and end of a vector by two x, y pairs: you would need to refactor the entire series code so that it uses neither vectorLength nor the scale, because the points would define their own lengths. I suspect that will be a very complex solution, so you can try to do it yourself, and let me know about the results.
In order to make it work, you need to recalculate and update the vectorLength of your vector series inside the chart.events.selection handler. Here is the example: https://jsfiddle.net/nh7b6qx9/

Lemniscate of Bernoulli in Objective-C

I found a very interesting thread on the GameDev side, link below:
https://gamedev.stackexchange.com/a/43704
I would like to implement this formula to draw an eight/infinity sign in a view, but I don't see how I can do this.
Can someone give me a clue on how to start the code?
Thanks for reading,
Given the parametric representation
scale = 2 / (3 - cos(2*t));
x = scale * cos(t);
y = scale * sin(2*t) / 2;
it's quite straightforward to write the code that draws the figure. What you do is start the variable t at 0, and increment it in a loop by a small value (say 0.05) each iteration until it reaches 2*PI. At each step, draw a line from the previous (x, y) point to the next calculated point. This will be a short line for each step, but together they will form the curved figure.
You can fiddle with the increment value to generate a figure that looks good for your application.
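For reference, here is a minimal C# sketch of that sampling loop (the Lemniscate/Sample names and the cx, cy, size and step parameters are mine); translating it to Objective-C and drawing lines between consecutive points in a view's drawRect: is mechanical:

    using System;
    using System.Collections.Generic;

    static class Lemniscate
    {
        // Sample the parametric curve into points; the caller draws a line
        // between each pair of consecutive points.
        public static List<(double X, double Y)> Sample(
            double cx, double cy, double size, double step = 0.05)
        {
            var points = new List<(double, double)>();
            for (double t = 0; t <= 2 * Math.PI; t += step)
            {
                double scale = 2 / (3 - Math.Cos(2 * t));
                points.Add((cx + size * scale * Math.Cos(t),
                            cy + size * scale * Math.Sin(2 * t) / 2));
            }
            return points;
        }
    }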

Scaling of point sprites (Direct3D 9)

Please tell me what values I should set for D3DRS_POINTSCALE_A, D3DRS_POINTSCALE_B and D3DRS_POINTSCALE_C so that point sprites are scaled just like other objects in the scene. The parameters A = 0, B = 0 and C = 1 (proposed by F. D. Luna) are not suitable, because the scale is not accurate enough and the distance between the particles (point sprites) can be greater than it should be. If I replace the point sprites with billboards, the scale of the particles is correct, but the rendering is much slower. Please help me: the speed of rendering particles is very important for my task, but the precision of their scale is very important too.
Direct3D computes the screen-space point size according to the following formula: MSDN - Point Sprites. I cannot understand what values should be set for A, B and C for the scaling to be correct.
P.S. Sorry for my English, I'm from Russia.
DirectX uses this function to determine the scaled size of a point:
out_scale = viewport_height * in_scale * sqrt( 1/( A + B * eye_distance + C * (eye_distance^2) ) )
eye_distance is the length of the eye-space position:
eye_distance = sqrt(X^2 + Y^2 + Z^2)
So to answer your question:
D3DRS_POINTSCALE_A is the constant element,
D3DRS_POINTSCALE_B is the linear element (scales eye_distance), and
D3DRS_POINTSCALE_C is the quadratic element (scales eye_distance squared).
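As a quick sanity check, here is that formula as a C# helper (the names are mine, not from the API). Note that with A = 0, B = 0, C = 1 it reduces to viewport_height * in_scale / eye_distance, i.e. plain 1/distance attenuation, which is what the setting from the question produces:

    using System;

    static class PointSpriteScale
    {
        // Screen-space point size per the attenuation formula above;
        // a, b, c correspond to D3DRS_POINTSCALE_A/B/C.
        public static double OutScale(double viewportHeight, double inScale,
                                      double eyeDistance, double a, double b, double c)
        {
            return viewportHeight * inScale *
                   Math.Sqrt(1.0 / (a + b * eyeDistance + c * eyeDistance * eyeDistance));
        }
    }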
