Highcharts Vector Plot with connected vectors of absolute length

Scenario: I need to draw a plot that has a background image. Based on the information in that image, there have to be multiple origins (let's call them 'targets') that can move over time. The movements of these targets have to be indicated by arrows/vectors: the first vector originates at the location of the target, the second vector originates where the previous one ended, and so on.
The result should look similar to this:
Plot with targets and movement vectors
While trying to implement this, I stumbled upon several questions:
I would use a chart with combined series: a scatter series to add the targets at exact x/y locations and a vector series to insert the vectors. Would this be the correct approach?
Since I want to set each vector's starting point to exact x/y coordinates, I use rotationOrigin: 'start'. When I now change vectorLength to something other than 20, the vector is still shifted by 10 pixels (http://jsfiddle.net/Chop_Suey/cx35ptrh/). This looks like a bug to me. Can it be fixed, or is there a workaround?
When I define a vector, it looks like [x, y, length, direction]. But length is a relative unit, calculated relative to the longest vector, which is drawn at 20 pixels by default or at whatever I set vectorLength to. Thus the vectors are not connected, and the space between them changes depending on plot size (axes min/max). I actually want to correlate the length with the plot axes (which might be tricky, since the x-axis and y-axis might have different scales). A workaround could be to add a redraw event, recalculate the vectors on every resize, and set vectorLength to the currently longest vector (which again can be calculated to correlate to the axes). This is very cumbersome; I would prefer to define the vectors as [x1, y1, x2, y2], where (x1/y1) denotes the starting and (x2/y2) the ending point of the vector. Is this possible somehow? Any recommendations?
Since the background image is not just decoration but relevant for the displayed data to make sense, it should change when I zoom in. Is it possible to 'lock' the background image to the original plot min/max, so that when I zoom in, the background image is zoomed as well (image quality does not matter)?

Combining these two series shouldn't be problematic at all, and that is the correct way, but it is necessary to change the prototype functions a bit so that the vectors are drawn differently. Here is the example: https://jsfiddle.net/6vkjspoc/
There is probably a bug in this module, and we will report it as a new issue as soon as possible. However, we made a workaround (or fix) for it, and now it works well, as you can see in the example above.
Vector length is currently calculated using a scale. For example, if vectorLength is equal to 100 and the vector series has two points that look like this:
{
  type: 'vector',
  vectorLength: 100,
  rotationOrigin: 'start',
  data: [
    [1, 50000, 1, 120],
    [1, 50000, 2, -120]
  ]
}
then the highest length of all points is taken, and based on it a scale is calculated for each point. The first point is drawn with a length of 50, because the algorithm computes point.length / this.lengthMax (here 1 / 2 = 0.5, times vectorLength = 100), as you can deduce from the code below:
H.seriesTypes.vector.prototype.arrow = function(point) {
    var path,
        fraction = point.length / this.lengthMax,
        u = fraction * this.options.vectorLength / 20,
        o = {
            start: 10 * u,
            center: 0,
            end: -10 * u
        }[this.options.rotationOrigin] || 0;

    // The stem and the arrow head. Draw the arrow first with rotation 0,
    // which is the arrow pointing down (vector from north to south).
    path = [
        'M', 0, 7 * u + o, // base of arrow
        'L', -1.5 * u, 7 * u + o,
        0, 10 * u + o,
        1.5 * u, 7 * u + o,
        0, 7 * u + o,
        0, -10 * u + o // top
    ];

    return path;
};
Regarding your question about defining the start and end of a vector by two x/y values: you would need to refactor the entire series code so that it uses neither vectorLength nor the scale, because the point lengths would then be defined by the data. I suspect that will be a very complex solution, so you can try to do it by yourself, and let me know about the results.
In order to make it work, you need to recalculate and update vectorLength of your vector series inside the chart.events.selection handler. Here is the example: https://jsfiddle.net/nh7b6qx9/

Related

Positioning and Resizing images in different aspect ratios keeping the same position relative to the screen

The problem I'm facing is more a matter of logic and algorithms than of specific language functionality. I'm coding it in Lua, but I believe it could be replicated in other languages with no major problems.
First of all, I'm going to show some properties and default settings that I'm using to come up with a solution.
1. I have a general function that displays an image on the screen, given the X, Y position and W, H dimensions. To facilitate understanding, this function is drawImage(x, y, w, h).
2. All values and calculations will be based on a default resolution and aspect ratio, which in this case is the developer's: DEV_SCREEN_W = 1366, DEV_SCREEN_H = 768 (the aspect ratio is 16:9).
3. So far, we have a function that displays an image on the screen, and default screen values that the X, Y position and W, H dimensions of a given image refer to.
4. Now we have the CLIENT, who can be anyone, with any resolution and aspect ratio; this client will run the code on his computer.
5. Knowing this, we need an algorithm so that the positions and dimensions of the image stay relatively the same regardless of the screen used to show it.
Knowing these properties and definitions, we can proceed to the problem. Let's assume that I, as a developer with a screen of DEV_SCREEN_W = 1366, DEV_SCREEN_H = 768, want to set an image at position X = 352, Y = 243 with W = 900, H = 300. On the developer screen, I'll have this:
Okay, now let's add one more image, with position and dimensions X = 352, Y = 458, W = 193, H = 69:
Okay, now we need an algorithm that keeps the same dimensions and position on the screen regardless of its size. Since W and H are different for each resolution, we can't use pixel positions directly. My solution was to express the position as a value between 0 and 1, so that it represents a percentage of the screen, and to do the same for W and H.
Let's suppose I get the screen information from the client: CLIENT_SCREEN_W = 1280, CLIENT_SCREEN_H = 720.
Since it's the same aspect ratio, I can apply this concept to both position and dimensions and everything remains perfectly proportional to the screen, so I would have:
Getting the percentages based on the DEV screen for BOTH images would look like:
X = 352/DEV_SCREEN_W * CLIENT_SCREEN_W,
Y = 243/DEV_SCREEN_H * CLIENT_SCREEN_H,
W = 900/DEV_SCREEN_W * CLIENT_SCREEN_W,
H = 300/DEV_SCREEN_H * CLIENT_SCREEN_H,
Basically, for those who didn't follow what is happening: I compute what percentage of the developer's screen the pixel position represents (X = 352/DEV_SCREEN_W, that is, X = 352/1366 = 0.2576) and multiply this result by the width of the client screen: 0.2576 * CLIENT_SCREEN_W = 0.2576 * 1280 = 329. Thus 329 and 352 are relatively the same position in the two resolutions.
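In Python rather than Lua, the naive percentage mapping described above might look like this (a minimal sketch; the function name is mine):

DEV_SCREEN_W, DEV_SCREEN_H = 1366, 768

def map_by_percentage(x, y, w, h, client_w, client_h):
    # Each coordinate is scaled by its own axis ratio, which only
    # preserves shapes when both screens share the same aspect ratio.
    return (x / DEV_SCREEN_W * client_w,
            y / DEV_SCREEN_H * client_h,
            w / DEV_SCREEN_W * client_w,
            h / DEV_SCREEN_H * client_h)

print(map_by_percentage(352, 243, 900, 300, 1280, 720))
# -> (329.8..., 227.8..., 843.3..., 281.25) on the 1280x720 client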
Following this concept, no matter what resolution the client uses, the images always keep the same proportions, both in position and in dimension, ONLY IF the screen is in a 16:9 ratio (the same as the developer's).
And this is where the problem arises: applying this same concept to any other ratio, e.g. a 4:3 screen, both images would be stretched:
Despite keeping the same X and Y relative to the screen, the W and H had to be altered out of proportion to fit the screen, producing the result seen above, which cannot happen.
To avoid this, I set a proportion rate, which I get by dividing the client's screen by the dev's (thus getting how much one represents of the other), and I multiply W and H of both images by it, so that both are proportionally resized from their original dimensions instead of being multiplied by an arbitrary per-axis value between 0 and 1.
Getting the proportion would be like:
PROPORTION_RATE = CLIENT_SCREEN_W/DEV_SCREEN_W
Applying it:
W = 900*PROPORTION_RATE, H = 300*PROPORTION_RATE
Basically, this multiplication by the proportion rate keeps the image at the exact proportions of the screen while resizing it. However, applying this, the images lose their relative positions, as seen in the image below:
As you can see, despite keeping the same proportion in W and H, the image lost its structural organization in relation to the original position defined on the developer's screen.
I've been stuck on this problem for a while.
The closest I got was to add to the X and Y axes how much a certain image has shrunk. However, if I do that, each image is corrected individually, but they are still out of relative position between them, as shown in the image below:
This problem of logic and algorithms is a little beyond my applicable knowledge; alone I can't find a solution, so I sincerely ask for help, or direction toward a way I can solve it.
Based on your comments, you want to scale both axes by the same amount, and you want to handle ratio mismatch by adding empty stripes at window edges as needed.
First you compute the scale: scale = min(w2/w1, h2/h1), where w1,h1 is the source size, and w2,h2 is the target size.
Next, assuming 0,0 is in the center of the screen, you can just do x2 = x1*scale, y2 = y1*scale, where x1,y1 are source coordinates and x2,y2 are converted coordinates.
But if 0,0 is in a corner of the screen for you (which is more probable), you have to do something like:
offset_x = (w2 - w1 * scale) / 2
offset_y = (h2 - h1 * scale) / 2
Then:
x2 = x1 * scale + offset_x
y2 = y1 * scale + offset_y
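A minimal sketch of this answer in Python (the function name is mine); note that image sizes scale by the same uniform factor, w1 * scale and h1 * scale:

def to_client(x1, y1, w1, h1, w2, h2):
    # Map a point from the w1 x h1 dev space into a w2 x h2 client window,
    # scaling both axes uniformly and centering the leftover stripes.
    scale = min(w2 / w1, h2 / h1)      # uniform scale, never stretches
    offset_x = (w2 - w1 * scale) / 2   # empty stripe split left/right
    offset_y = (h2 - h1 * scale) / 2   # empty stripe split top/bottom
    return x1 * scale + offset_x, y1 * scale + offset_y

# 16:9 dev screen to a 4:3 client: stripes appear above and below
print(to_client(352, 243, 1366, 768, 1280, 960))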

Placing a shape inside another shape using opencv

I have two images and I need to place the second image inside the first image. The second image can be resized, rotated or skewed such that it covers as large an area of the other image as possible. As an example, in the figure shown below, the green circle needs to be placed inside the blue shape:
Here the green circle is transformed so that it covers a larger area. Another example is shown below:
Note that there may be multiple results; any similar result is acceptable, as shown in the above example.
How do I solve this problem?
Thanks in advance!
I tested the idea I mentioned earlier in the comments, and the output is almost good. It could be better, but that takes more time. The final code is long and depends on one of my old personal projects, so I will not share it, but I will explain step by step how I wrote the algorithm. Note that I have tested the algorithm many times; it is not yet 100% accurate.
Do this N times:
1. Copy the shape.
2. Transform it randomly.
3. Put the shape on the background.
4. If the shape exceeds the background, it is not acceptable; go back to step 1. Otherwise, continue to step 5.
5. Calculate the width, height, and number of pixels of the shape.
6. Keep a list of the best candidates and compare these three parameters (W, H, pixels) with the members of the list. If a better item is found, save it.
I set the value of N to 5,000. The larger the number, the slower the algorithm runs, but the better the result.
You can use anything for the transform: mirror, rotate, shear, scale, resize, etc. I used warpPerspective for this one.
import sys
import random
import cv2
import numpy as np

im1 = cv2.imread(sys.path[0]+'/Back.png')
im2 = cv2.imread(sys.path[0]+'/Shape.png')
bH, bW = im1.shape[:2]
sH, sW = im2.shape[:2]

# TopLeft, TopRight, BottomRight, BottomLeft of the shape
_inp = np.float32([[0, 0], [sW, 0], [sW, sH], [0, sH]])

# Random pivot that the warped corners are scattered around
cx = random.randint(5, sW - 5)
cy = random.randint(5, sH - 5)
o = 0  # extra margin allowed outside the original bounds

# Random transformed output corners
_out = np.float32([
    [random.randint(-o, cx - 1), random.randint(1 - o, cy - 1)],
    [random.randint(cx + 1, sW + o), random.randint(1 - o, cy - 1)],
    [random.randint(cx + 1, sW + o), random.randint(cy + 1, sH + o)],
    [random.randint(-o, cx - 1), random.randint(cy + 1, sH + o)]
])

# Warp the shape onto a background-sized canvas (dsize is (width, height))
M = cv2.getPerspectiveTransform(_inp, _out)
t = cv2.warpPerspective(im2, M, (bW, bH))
You can use countNonZero to find the number of pixels, and findContours plus boundingRect to find the shape size:
def getSize(msk):
    cnts, _ = cv2.findContours(msk, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    cnts.sort(key=lambda p: max(cv2.boundingRect(p)[2], cv2.boundingRect(p)[3]), reverse=True)
    w, h = 0, 0
    if len(cnts) > 0:
        _, _, w, h = cv2.boundingRect(cnts[0])
    pix = cv2.countNonZero(msk)
    return pix, w, h
To find the overlap of the background and the shape, you can do something like this:
Make a mask from the background and from the shape and use bitwise methods. Change this section according to the software you wrote; this is just an example :)
mskMix = cv2.bitwise_and(mskBack, mskShape)
mskMix = cv2.bitwise_xor(mskMix, mskShape)
isCandidate = not np.any(mskMix == 255)
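Putting the steps together, a minimal sketch of the search loop (random_transform is a hypothetical helper covering steps 1-3 that returns the warped image plus its binary mask; mskBack is assumed to be precomputed from the background):

N = 5000
best = None  # (pixels, w, h, warped image) of the best candidate so far

for _ in range(N):
    # Steps 1-3: randomly warp the shape onto a background-sized canvas
    t, mskShape = random_transform(im2)

    # Step 4: reject candidates that leak outside the background mask
    mskMix = cv2.bitwise_and(mskBack, mskShape)
    mskMix = cv2.bitwise_xor(mskMix, mskShape)
    if np.any(mskMix == 255):
        continue

    # Steps 5-6: score by (pixels, w, h); lexicographic tuple comparison
    # is one simple way to compare the three parameters
    pix, w, h = getSize(mskShape)
    if best is None or (pix, w, h) > best[:3]:
        best = (pix, w, h, t)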
For example, this is not a candidate answer: if you look closely at the image on the right, you will notice that the shape has exceeded the background.
I tested the circle with 4 different backgrounds; here are the results:
After 4879 Iterations:
After 1587 Iterations:
After 4621 Iterations:
After 4574 Iterations:
A few additional points: if you use a method like medianBlur to clean up noise in the background mask and shape mask, you may find a better solution.
I suggest you read about Evolutionary Computation, Metaheuristic and Soft Computing algorithms for better understanding of this algorithm :)

How to implement an "Excel like" "add cell" into a canvas

I'm sorry for the title, which maybe doesn't properly describe what I would like to achieve. I'm starting to develop a new piece of software that should present a "grid" to the user, who can manipulate it by adding "rows" or "columns" at any point of this "grid". The problem is that I'm not sure a real grid is the suitable solution, because there are some graphical requirements, like changing individual cell sizes, nesting cells, zooming/stretching, etc. So I started to analyze a solution in WPF that uses DrawingVisual elements (for performance reasons).
I'm able to draw the "grid" in the desired way. I'm also able to add rows or columns at the edges of the drawing. But I can't figure out any solution to modify it in the "middle" (except redrawing the whole thing). I'll explain with an image: on the left there's the "grid" after it has been drawn for the first time; on the right there's the new grid that should be drawn after the user performs an operation.
A more complex example is the following, where the "row" is added inside an existing cell, causing all the cells to "grow".
As I said, I know I could redraw the whole thing, but I'm concerned about performance. Keep in mind that in a real scenario there could be thousands of blocks and many nesting levels.
Any suggestion is appreciated. The use of WPF is not mandatory, but it will be a desktop app in .NET 5.0; neither is the use of DrawingVisual mandatory. I can evaluate any solution. Thank you.
A simple technique is to store the positions of the columns relative to the left of the canvas when you first draw the tables. When you want to add a new column, you can crop the image at that point and, in a larger canvas, copy the left and right pieces and draw only the new middle column from scratch.
Of course, the coordinates of each column could instead be recovered with image-processing techniques (one such approach is sketched below), but that reduces performance.
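If you did want to recover the column coordinates from the image itself, a vertical projection profile is one common approach (a minimal sketch; the file name and threshold are my own assumptions):

import cv2
import numpy as np

img = cv2.imread("grid.png", cv2.IMREAD_GRAYSCALE)
dark_per_column = (img < 128).sum(axis=0)   # count of dark pixels per x position
threshold = 0.5 * img.shape[0]              # a grid line spans most of the height
line_xs = np.flatnonzero(dark_per_column > threshold)
print(line_xs)  # x positions of the vertical grid lines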
I wrote this code with Python, but I do not think it would be difficult to convert it to C#.
import cv2
import numpy as np

# Copy one image over another at position (x, y)
def imdraw(im, over, x, y):
    y1, y2 = y, y + over.shape[0]
    x1, x2 = x, x + over.shape[1]
    for c in range(0, 3):
        im[y1:y2, x1:x2, c] = over[:, :, c]
    return im

pt = 220   # x position where the new column is inserted
col = 300  # width of the new column
off = 15   # padding inside the new column

im = cv2.imread("grid.png", 1)
h, w = im.shape[:2]
crop_left = im[0:h, 0:pt]
crop_right = im[0:h, pt:w]
cv2.imwrite("left.jpg", crop_left)
cv2.imwrite("right.jpg", crop_right)

# Create an empty image with a white background
out = 255 * np.ones(shape=[h, w + col, 3], dtype=np.uint8)
out = imdraw(out, crop_left, 0, 0)
out = imdraw(out, crop_right, pt + col, 0)
out = cv2.rectangle(
    out,
    pt1=(pt + off, off),
    pt2=(pt + col - off, h - off),
    color=(128, 0, 200),
    thickness=5,
    lineType=cv2.LINE_AA,
)
cv2.imwrite("out.jpg", out)
Output:

Pixel-perfect collisions in Monogame, with float positions

I want to detect pixel-perfect collisions between 2 sprites.
I use the following function, which I found online, but it makes total sense to me.
static bool PerPixelCollision(Sprite a, Sprite b)
{
    // Get the color data of each texture
    Color[] bitsA = new Color[a.Width * a.Height];
    a.Texture.GetData(0, a.CurrentFrameRectangle, bitsA, 0, a.Width * a.Height);
    Color[] bitsB = new Color[b.Width * b.Height];
    b.Texture.GetData(0, b.CurrentFrameRectangle, bitsB, 0, b.Width * b.Height);

    // Calculate the intersecting rectangle
    int x1 = (int)Math.Floor(Math.Max(a.Bounds.X, b.Bounds.X));
    int x2 = (int)Math.Floor(Math.Min(a.Bounds.X + a.Bounds.Width, b.Bounds.X + b.Bounds.Width));
    int y1 = (int)Math.Floor(Math.Max(a.Bounds.Y, b.Bounds.Y));
    int y2 = (int)Math.Floor(Math.Min(a.Bounds.Y + a.Bounds.Height, b.Bounds.Y + b.Bounds.Height));

    // For each single pixel in the intersecting rectangle
    for (int y = y1; y < y2; ++y)
    {
        for (int x = x1; x < x2; ++x)
        {
            // Get the color from each texture
            Color colorA = bitsA[(x - (int)Math.Floor(a.Bounds.X)) + (y - (int)Math.Floor(a.Bounds.Y)) * a.Texture.Width];
            Color colorB = bitsB[(x - (int)Math.Floor(b.Bounds.X)) + (y - (int)Math.Floor(b.Bounds.Y)) * b.Texture.Width];

            // If both pixels are non-transparent (alpha != 0), there is a collision
            if (colorA.A != 0 && colorB.A != 0)
            {
                return true;
            }
        }
    }

    // If no collision occurred by now, we're clear
    return false;
}
(All the Math.Floor calls are useless; I copied this function from my current code, where I'm trying to make it work with floats.)
It reads the color of the sprites in the rectangle portion that is common to both sprites.
This actually works fine when I display the sprites at x/y coordinates where x and y are ints (.Bounds.X and .Bounds.Y):
View an example
The problem with displaying sprites at int coordinates is that it results in very jaggy movement along diagonals:
View an example
So ultimately I would like not to cast the sprite positions to ints when drawing them, which results in smooth(er) movement:
View an example
The issue is that PerPixelCollision works with ints, not floats, which is why I added all those Math.Floor calls. As is, it works in most cases, but it misses one row and one column of checking on the bottom and right (I think) of the common rectangle, because of the rounding induced by Math.Floor:
View an example
When I think about it, it makes sense. If x1 is 80 and x2 would actually be 81.5 but becomes 81 because of the cast, then the loop only runs for x = 80 and therefore misses the last column (in the example gif, the fixed sprite has a transparent column to the left of the visible pixels).
The issue is that no matter how hard I think about this, or what I try (and I have tried a lot of things), I cannot make this work properly. I am almost convinced that x2 and y2 should use Math.Ceiling instead of Math.Floor, so as to "include" the last pixel that is otherwise left out, but then I always get an index out of range in the bitsA or bitsB arrays.
Would anyone be able to adjust this function so that it works when Bounds.X and Bounds.Y are floats?
PS: could the issue possibly come from BoxingViewportAdapter? I am using this (from MonoGame.Extended) to upscale my game, which is actually 144p.
Remember, there is no such thing as a fractional pixel. For movement purposes, it makes complete sense to use floats for the values and cast them to integer pixels when drawing. The problem is not in the fractional values, but in the way they are drawn.
The main reason the collisions do not appear to work correctly is the scaling. The new pixels in between the diagonals get their colors by averaging* the surrounding pixels. The effect makes the image appear larger than the original, especially on the diagonals.
*There are several methods that may be used for the scaling; bicubic and linear are the most common.
The only direct (pixel-perfect) solution is to compare the actual output after scaling. This requires rendering the entire screen twice, plus extra computation proportional to the scale factor (not recommended).
Since you are comparing the non-scaled images, your collisions appear to be off.
The other issue is movement speed. If you are moving faster than one pixel per Update(), detecting per-pixel collisions is not enough if the movement is to be restricted by the obstacle: you must resolve the collision.
For enemies or environmental hazards your original code is sufficient and collision resolution is not required. It will give the player a minor advantage.
A simple resolution algorithm (see below for a mathematical solution) is to unwind the movement by half and check for collision; if it is still colliding, unwind the movement by a quarter, otherwise advance it by a quarter, and check again. Repeat until the adjustment is less than 1 pixel. This runs in O(log speed) steps.
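A minimal sketch of that unwinding loop, in Python rather than the game's C# (the position/movement tuples and the collides callback are assumptions, and the final position may still need one last clamp):

def resolve_collision(pos, move, collides):
    # Binary-search backwards along the last movement vector.
    # pos is the (colliding) position after the move; collides(p) runs
    # the per-pixel test at position p. Runs O(log speed) iterations.
    speed = max(abs(move[0]), abs(move[1]))
    frac, step = -0.5, 0.25            # start by unwinding half the movement
    while step * speed >= 0.5:         # stop once the adjustment is sub-pixel
        p = (pos[0] + frac * move[0], pos[1] + frac * move[1])
        if collides(p):
            frac -= step               # still colliding: unwind further
        else:
            frac += step               # clear: advance again
        step /= 2
    return (pos[0] + frac * move[0], pos[1] + frac * move[1])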
As for the top wall not colliding perfectly: if the starting Y value is not a multiple of the vertical movement speed, you will not land perfectly on zero. I prefer to resolve this by setting Y = 0 when Y is negative. The same goes for X, and for X and Y > screen bounds - origin, at the bottom and right of the screen.
I prefer mathematical solutions for collision resolution. In your example images you show a box colliding with a diamond; the diamond shape is represented mathematically by the Manhattan distance (Math.Abs(x1-x2) + Math.Abs(y1-y2)). From this fact it is easy to directly calculate the resolution to the collision.
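For the diamond that boils down to a one-line test (a sketch; the center/radius parameter names are mine):

def in_diamond(px, py, cx, cy, r):
    # A diamond is the set of points whose Manhattan distance from the
    # center is at most r, so penetration depth is r minus that distance.
    return abs(px - cx) + abs(py - cy) <= r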
On optimizations:
Be sure to check that the bounding Rectangles are overlapping before calling this method.
As you have stated, remove all the Math.Floor calls, since the cast is sufficient. Move all calculations inside the loops that do not depend on the loop variables out of the loops.
The (int)a.Bounds.Y * a.Texture.Width and (int)b.Bounds.Y * b.Texture.Width terms do not depend on the x or y variables and should be calculated and stored before the loops. The y-dependent subtractions (y - (int)a.Bounds.Y, and likewise for b) should be computed once per iteration of the outer y loop.
I would recommend using a bitboard (1 bit per 8x8 square) for collisions. It reduces the broad (8x8) collision checks to O(1). For a resolution of 144x144, the entire search space becomes 18x18.
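Sketched in Python (the cell size and helper names are mine, following the 144x144 example; the same AND trick is one line in C#):

CELL, GRID = 8, 18  # 8x8-pixel cells over a 144x144 screen

def to_bitboard(rects):
    # Set one bit per cell that any solid rectangle overlaps.
    bits = 0
    for x, y, w, h in rects:
        for cy in range(y // CELL, (y + h - 1) // CELL + 1):
            for cx in range(x // CELL, (x + w - 1) // CELL + 1):
                bits |= 1 << (cy * GRID + cx)
    return bits

# The broad phase is a single AND; only on a hit run the per-pixel check:
# if to_bitboard(walls) & to_bitboard([player_rect]): run PerPixelCollision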
You can wrap your sprite in a rectangle and use its Intersects function, which detects collisions.
Intersects - XNA

How to find a point along a vector at a variable distance

I need to find the value along a vector for a given x coordinate, like so:
I know the values of A, B and C. All of these values are variable. I need to calculate X. I know this is possible; I just can't remember my trigonometry lessons.
I'm aware of similar questions like this one, but it only finds the mid-point.
Thank you.
Let's say A = (x1, y1) and B = (x2, y2), and the coordinates of X are (x, y). Then:
y = ((y2 - y1) / (x2 - x1)) * x + c .....(1)
where c is the y-intercept, which in this case is 0.
t = ||C - A|| / ||D - A||
Z = (B - A) * t
where t is the length of vector C minus vector A, divided by the length of D (the unlabeled original length along the x axis) minus vector A, and Z is the resulting offset from A along the line.
For a line through the origin, as you have in the picture, you can use the idea of similar triangles:
X_y = B_y * (X_x/B_x)
Or, for the numbers shown in the example, X_y = 50, and X=(50,50).
To understand this, similar triangles says:
X_y/X_x = B_y/B_x
since triangles with similar shapes (i.e., that have the same angles) have the same ratios; the first formula is just the second one solved for X_y.
(If the line isn't through the origin, first subtract A from everything, then calculate X_y as above, then add A to everything.)
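Putting all of that into a small Python sketch (the function name and the B coordinates are invented for the example; the pictured line y = x gives X = (50, 50) at x = 50):

def point_on_line(a, b, x):
    # Return the y value of the line through points a and b at the given x,
    # including the shift for a line that doesn't pass through the origin.
    ax, ay = a
    bx, by = b
    return ay + (by - ay) * (x - ax) / (bx - ax)

# Line through the origin with B assumed at (100, 100):
print(point_on_line((0, 0), (100, 100), 50))  # -> 50.0, so X = (50, 50)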
