All the docs I've seen for the elliptical arc (A) command in an SVG <path/>'s d attribute give the first two arguments as the x and y radii of the arc.
Earlier, though, I was playing around, and in FF8 and Safari 5, it seemed like the path
<path d="M 100 100 A 50 50 0 0 0 200 100 Z"/>
and the path
<path d="M 100 100 A 1 1 0 0 0 200 100 Z"/>
rendered identically. From a bit more playing, it seemed that what was really being used was the ratio between rx and ry. This makes sense (what else are you going to do if the current position is more than 2r away?), but is it officially documented anywhere?
It'd be nice if I could rely on this behaviour so I didn't have to manually calculate the x and y radius and instead just state their ratio.
Per the SVG specification: If rx, ry are such that there is no solution (basically, the ellipse is not big enough to reach from (x1, y1) to (x2, y2)) then the ellipse is scaled up uniformly until there is exactly one solution (until the ellipse is just big enough).
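That scaling step is spelled out in the spec's implementation notes (SVG 1.1 appendix F.6.6, "Correction of out-of-range radii"). A minimal C++ sketch of that step, with names of my own invention, looks like this:

#include <cmath>

// Out-of-range radii correction from SVG 1.1 appendix F.6.6.
// (x1, y1) and (x2, y2) are the arc's endpoints; phi is the
// x-axis-rotation argument in radians; rx and ry are updated in place.
void correctRadii(double x1, double y1, double x2, double y2,
                  double phi, double& rx, double& ry)
{
    rx = std::fabs(rx);
    ry = std::fabs(ry);

    // Transform the midpoint of the chord into the ellipse's frame.
    double dx = (x1 - x2) / 2.0;
    double dy = (y1 - y2) / 2.0;
    double x1p =  std::cos(phi) * dx + std::sin(phi) * dy;
    double y1p = -std::sin(phi) * dx + std::cos(phi) * dy;

    // If lambda > 1 the ellipse cannot reach both endpoints, so it is
    // scaled up uniformly -- at that point only the rx:ry ratio matters.
    double lambda = (x1p * x1p) / (rx * rx) + (y1p * y1p) / (ry * ry);
    if (lambda > 1.0) {
        double s = std::sqrt(lambda);
        rx *= s;
        ry *= s;
    }
}

So 50:50 and 1:1 are the same ratio, and both of your paths get scaled up to the same just-big-enough circle, which is why they render identically.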
Using GDI+ in Delphi 10.2.3: I have an elliptical (not circular) arc drawn from a rectangular RectF and defined start and sweep angles using DrawArcF. I need to be able to find any point along the centerline of the arc (regardless of pen width) based just on the angle of the point - e.g., if the arc starts at 210 degrees and sweeps 120 degrees, I need to find the point at, say, 284 degrees, relative to the RectF.
In this case, the aspect ratio of the rectangle remains constant regardless of its size, so the shape of the arc should remain consistent as well, if that makes a difference.
Any ideas on how to go about this?
The parametric equation for an axis-aligned ellipse centered at (cx, cy) with semiaxes a, b, in terms of the true angle Fi, is:
t = ArcTan2(a * Sin(Fi), b * Cos(Fi))
x = cx + a * Cos(t)
y = cy + b * Sin(t)
(I used atan2 to get rid of atan's range limitation/sign issues)
Note that parameter t runs through the same range 0..2*Pi but differs from true angle Fi (they coincide at angles k*Pi/2).
There is a picture of the Fi/t relation for b/a = 0.6 on MathWorld (near formula 58).
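As a sketch of those formulas in C++ (the question is Delphi, but the math is identical; the function name is mine):

#include <cmath>

// Point on an axis-aligned ellipse with center (cx, cy) and semiaxes
// a, b at true angle fi (radians, measured from the center).
void ellipsePointAtAngle(double cx, double cy, double a, double b,
                         double fi, double& x, double& y)
{
    // atan2 keeps t in the correct quadrant, avoiding atan's
    // range/sign issues mentioned above.
    double t = std::atan2(a * std::sin(fi), b * std::cos(fi));
    x = cx + a * std::cos(t);
    y = cy + b * std::sin(t);
}

For your example you would take (cx, cy) as the center of the RectF, a and b as half its width and height, and fi = 284 degrees converted to radians (keeping in mind that GDI+ measures angles clockwise because y grows downward).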
I was trying to write some code for animations, and started wondering if scaling equally in x and y directions is equivalent to translation in the z axis. Visually, they seem to be the same, but is there any mathematical proof for it?
It depends on the perspective. For instance, if you translate by 2 along the z axis, it could double the size of the object or it could scale it by only a quarter or even less; it depends where the "camera" is in the space you are working in. Scaling by 2 on x and y will always double the size, no matter where the camera is.
Mathematically are they the same? Not remotely.
Think about what you are doing in 3D space. Start with a 10cm by 10cm piece of paper on a table.
Now scale it in x and y to 2x.
It is still on the table but now 20x20cm.
Now if you had translated along z axis the paper is still 10x10cm but now hovering above the table.
It looks bigger, but only because it is closer; you can see clearly that the same thing has not happened in the two cases.
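A quick numeric sketch makes the difference concrete. With a simple pinhole projection (the focal length and depths here are arbitrary, made-up numbers):

#include <cstdio>

// Pinhole projection: a point at lateral offset x and depth z
// projects to f * x / z.
double project(double x, double z, double f = 1.0) { return f * x / z; }

int main()
{
    double x = 1.0, z = 10.0;                            // edge of the object
    std::printf("original:  %f\n", project(x, z));       // 0.100
    std::printf("scaled x2: %f\n", project(2.0 * x, z)); // 0.200 -- always exactly double
    std::printf("z - 5:     %f\n", project(x, z - 5.0)); // 0.200 -- double only because 5 = z/2
    std::printf("z - 2:     %f\n", project(x, z - 2.0)); // 0.125 -- same move, different growth
    return 0;
}

Scaling by 2 always doubles the projected size; translating by a fixed amount along z doubles it only when the object happens to start at exactly twice that depth. A single frame can look identical, but the two operations are not equivalent.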
Take sample points (10,10), (20,0), (20,40), (20,20).
In MATLAB, polyfit returns slope 1, but for the same data OpenCV's fitLine returns slope 10.7. From hand calculations, the near-vertical line (slope 10.7) is a much better least-squares fit.
How come we’re getting different lines from the two libraries?
OpenCV code - (on iOS)
vector<cv::Point> vTestPoints;
vTestPoints.push_back(cv::Point( 10, 10 ));
vTestPoints.push_back(cv::Point( 20, 0 ));
vTestPoints.push_back(cv::Point( 20, 40 ));
vTestPoints.push_back(cv::Point( 20, 20 ));
Mat cvTest = Mat(vTestPoints);
cv::Vec4f testWeight;
fitLine( cvTest, testWeight, CV_DIST_L2, 0, 0.01, 0.01);
NSLog(@"Slope: %.2f", testWeight[1] / testWeight[0]);
Xcode log shows
2014-02-12 16:14:28.109 Application[3801:70b] Slope: 10.76
Matlab code
>> px
px = 10 20 20 20
>> py
py = 10 0 20 40
>> polyfit(px,py,1)
ans = 1.0000e+000 -2.7733e-014
MATLAB is trying to minimise the error in y for a given input x (i.e. as if x is your independent and y your dependent variable).
In this case, the line that goes through the points (10,10) and (20,20) is probably the best bet. A near vertical line that goes close to all three points with x=20 would have a very large error if you tried to calculate a value for y given x=10.
Although I don't recognise the OpenCV syntax, I'd guess that CV_DIST_L2 is a distance metric that means you're trying to minimise overall distance between the line and each point in the x-y plane. In that case a more vertical line which passes through the middle of the point set would be the closest.
Which is "correct" depends on what your points represent.
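You can reproduce both numbers by minimising the two objectives directly. Here is a small self-contained sketch (plain C++, no OpenCV; the variable names are mine) computing the ordinary least-squares slope, which is what polyfit does, and the total least-squares direction, i.e. minimal perpendicular distance, which is what the L2 fitLine result corresponds to:

#include <cmath>
#include <cstdio>

int main()
{
    double xs[] = { 10, 20, 20, 20 };
    double ys[] = { 10,  0, 20, 40 };
    const int n = 4;

    double mx = 0, my = 0;
    for (int i = 0; i < n; ++i) { mx += xs[i]; my += ys[i]; }
    mx /= n; my /= n;

    double sxx = 0, syy = 0, sxy = 0;
    for (int i = 0; i < n; ++i) {
        double dx = xs[i] - mx, dy = ys[i] - my;
        sxx += dx * dx; syy += dy * dy; sxy += dx * dy;
    }

    // Ordinary least squares: minimise vertical (y) error.
    std::printf("OLS slope: %.2f\n", sxy / sxx);          // prints 1.00

    // Total least squares: minimise perpendicular distance; the line
    // direction is the major axis of the point scatter.
    double theta = 0.5 * std::atan2(2.0 * sxy, sxx - syy);
    std::printf("TLS slope: %.2f\n", std::tan(theta));    // prints 10.76
    return 0;
}

That reproduces both 1.00 and 10.76, so neither library is wrong - they are just minimising different errors.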
In the following link, http://homepages.inf.ed.ac.uk/rbf/HIPR2/linedet.htm, it says that for detecting lines we need to specify the width and angle of the line: "to detect the presence of lines of a particular width n, at a particular orientation theta". The example convolution kernels are given for orientations of 0, 45, 90, and 135 degrees and a width of a single pixel.
What I don't understand is how the convolution kernel changes if I want thicker lines - a width of 3 or 5 or 7 pixels at 0, 45, 90, or 135 degrees. And what if I want to change the angles - how would the convolution kernel change then?
I am new to image processing, so my understanding is limited. A tutorial or any other help would be appreciated.
For thicker lines, you need a larger kernel in the conventions of your link: more rows of 2's to cover the width of line you are looking for. For a horizontal line 3 pixels wide, you would need the following kernel:
-1 -1 -1 -1 -1
2 2 2 2 2
2 2 2 2 2
2 2 2 2 2
-1 -1 -1 -1 -1
and so on, depending on angles and widths.
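The pattern is mechanical enough to generate. A sketch in C++ (the function is mine, and it keeps the link's -1/2 weighting convention; for a zero-sum kernel you would rescale the border rows):

#include <cstdio>
#include <vector>

// Horizontal line-detection kernel for a given line width, following
// the convention above: rows of 2 for the line body, a row of -1
// above and below. Transpose the result for vertical lines.
std::vector<std::vector<int> > lineKernel(int width)
{
    int size = width + 2;                 // one border row on each side
    std::vector<std::vector<int> > k(size, std::vector<int>(size, 2));
    for (int col = 0; col < size; ++col) {
        k[0][col] = -1;                   // row above the line
        k[size - 1][col] = -1;            // row below the line
    }
    return k;
}

int main()
{
    std::vector<std::vector<int> > k = lineKernel(3);  // the 5x5 kernel above
    for (size_t r = 0; r < k.size(); ++r) {
        for (size_t c = 0; c < k[r].size(); ++c)
            std::printf("%3d", k[r][c]);
        std::printf("\n");
    }
    return 0;
}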
If you want a kernel for orientations other than 0, 45, 90, and 135 degrees, it's more complicated, and the simple integer kernels above don't generalise cleanly. There are other methods you can use instead, for example the Hough transform: http://en.wikipedia.org/wiki/Hough_transform.
I'm using XNA (which uses DirectX) for some graphical programming. I had a box rotating around a point, but the rotations are a bit odd.
Everything seems like someone took a compass and rotated it 180 degrees, so that N is 180, W is 90, etc.
I can't quite seem to find a source that states the orientation, so I'm probably just not using the right keywords.
Can someone help me find what XNA/DirectX's orientation is, and a page that states this too?
DirectX uses a left-handed coordinate system.
XNA uses a right-handed coordinate system.
Forward is -Z, backward is +Z. Forward points into the screen.
Right is +X, left is -X. Right points to the right-side of the screen.
Up is +Y, down is -Y. Up points to the top of the screen.
The matrix layout is as follows (using an identity matrix in this example). XNA uses a row-major layout for its matrices. The first three rows represent orientation. The first three columns of the last row ([4, 1], [4, 2], and [4, 3]) represent translation/position. See the documentation on XNA's Matrix structure.
In the case of a transformation matrix (combining rotation and position):
Right 1 0 0 0
Up 0 1 0 0
Forward 0 0 -1 0
Pos 0 0 0 1
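To make that layout concrete, here is a sketch in plain C++ (XNA itself is C#; this struct is mine, not XNA's API) showing how the basis vectors and position sit in the rows:

#include <cstdio>

// Row-major 4x4 matrix laid out the way XNA's Matrix is described
// above: rows 1-3 hold the orientation basis, row 4 the position.
struct Mat4 { float m[4][4]; };

int main()
{
    Mat4 world = {{
        { 1, 0,  0, 0 },   // Right   = +X
        { 0, 1,  0, 0 },   // Up      = +Y
        { 0, 0, -1, 0 },   // Forward = -Z (into the screen)
        { 0, 0,  0, 1 }    // Position at the origin
    }};

    std::printf("right:    %g %g %g\n", world.m[0][0], world.m[0][1], world.m[0][2]);
    std::printf("up:       %g %g %g\n", world.m[1][0], world.m[1][1], world.m[1][2]);
    std::printf("forward:  %g %g %g\n", world.m[2][0], world.m[2][1], world.m[2][2]);
    std::printf("position: %g %g %g\n", world.m[3][0], world.m[3][1], world.m[3][2]);
    return 0;
}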