Immutable classes with calculated fields - dart

To initialize a final variable using logic that can't be handled in the initializer list, you can use a factory constructor. Even though that logic is a simple math operation, it apparently breaks Dart's const contract.
I'm failing to find a way to use such an alternative constructor, i.e. one that lets me pass three points instead of the center and radius, while still keeping the immutability of the final object.
import 'dart:math';

class Circle {
  final Point center;
  final double radius;

  const Circle(this.center, this.radius);

  factory Circle.fromPoints(Point p1, Point p2, Point p3) {
    Point center = _getCenter(p1, p2, p3);
    return Circle(center, center.distanceTo(p1));
  }

  static Point _getCenter(Point p1, Point p2, Point p3) {
    double centerX = ...
    double centerY = ...
    return Point(centerX, centerY);
  }
}
I've also tried making that getCenter method public and calculating the center outside the constructor, but it again fails to compile, since the calculated center is not a compile-time constant and so can't be used in a const instance creation.
Point p1 = const Point(-1, 0);
Point p2 = const Point(0, 1);
Point p3 = const Point(1, 0);
Point center = Circle.getCenter(p1, p2, p3);
Circle circle = const Circle(center, center.distanceTo(p1));
But I still prefer having multiple constructors producing const objects instead of such workarounds.

You can get an immutable result, but not a constant one.
The restrictions on constant evaluation are such that you are not allowed to call any but a select few functions in const expressions (mainly operators on numbers). You can't even extract the x and y positions of a Point during constant evaluation. If you need more than what constant evaluation allows, you cannot do the computation as a constant expression. You also cannot pass non-constant values into a const constructor invocation.
That doesn't mean that the result isn't immutable. It's still a class with two final fields, each holding an immutable value.
So what you are doing is fine and correct, except for putting const in front of Circle(center, center.distanceTo(p1)), because the second argument is not a constant expression.
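For reference, here is a minimal self-contained sketch of the non-const factory approach, with the elided center math filled in using the standard circumcenter formula (the _circumcenter helper and the main function are illustrative additions, not part of the question):

import 'dart:math';

class Circle {
  final Point<double> center;
  final double radius;

  const Circle(this.center, this.radius);

  factory Circle.fromPoints(
      Point<double> p1, Point<double> p2, Point<double> p3) {
    final center = _circumcenter(p1, p2, p3);
    return Circle(center, center.distanceTo(p1));
  }

  // Standard circumcenter formula for three non-collinear points.
  static Point<double> _circumcenter(
      Point<double> a, Point<double> b, Point<double> c) {
    final d = 2 * (a.x * (b.y - c.y) + b.x * (c.y - a.y) + c.x * (a.y - b.y));
    final ux = ((a.x * a.x + a.y * a.y) * (b.y - c.y) +
            (b.x * b.x + b.y * b.y) * (c.y - a.y) +
            (c.x * c.x + c.y * c.y) * (a.y - b.y)) /
        d;
    final uy = ((a.x * a.x + a.y * a.y) * (c.x - b.x) +
            (b.x * b.x + b.y * b.y) * (a.x - c.x) +
            (c.x * c.x + c.y * c.y) * (b.x - a.x)) /
        d;
    return Point(ux, uy);
  }
}

void main() {
  // Not const, but the resulting object is still deeply immutable.
  final circle = Circle.fromPoints(
      const Point(-1, 0), const Point(0, 1), const Point(1, 0));
  print(circle.center); // Point(0.0, 0.0)
  print(circle.radius); // 1.0
}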

Related

Qt 5 drawing translate(), rotate(), font fill issues

I'm writing my first Qt 5 application... This uses a third-party map library (QGeoView).
I need to draw an object (something like a stylized airplane) over this map. Following the library coding guidelines, I derived from the base class QGVDrawItem my QGVAirplane.
The airplane class contains heading and position values: such values must be used to draw the airplane on the map (of course in the correct position and with the correct heading). The library requires QGVDrawItem derivatives to override three base class methods:
QPainterPath projShape() const;
void projPaint(QPainter* painter);
void onProjection(QGVMap* geoMap);
The first method is used to obtain the area of the map that needs to be updated. The second is responsible for drawing the object on the map. The third reprojects the item's geographic point into the map's projected coordinate space (it's not relevant to my problem).
My code looks like this:
void onProjection(QGVMap* geoMap)
{
    QGVDrawItem::onProjection(geoMap);
    mProjPoint = geoMap->getProjection()->geoToProj(mPoint);
}

QPainterPath projShape() const
{
    QRectF _bounding = createGlyph().boundingRect();
    double _size = fmax(_bounding.height(), _bounding.width());
    QPainterPath _bounding_path;
    _bounding_path.addRect(0, 0, _size, _size);
    _bounding_path.translate(mProjPoint.x(), mProjPoint.y());
    return _bounding_path;
}

// This function creates the path containing the airplane glyph
// along with its label
QPainterPath createGlyph() const
{
    QPainterPath _path;
    QPolygon _glyph = QPolygon();
    _glyph << QPoint(0, 6) << QPoint(0, 8) << QPoint(14, 6)
           << QPoint(28, 8) << QPoint(28, 6) << QPoint(14, 0);
    _path.addPolygon(_glyph);
    _path.setFillRule(Qt::FillRule::OddEvenFill);
    _path.addText(OFF_X_TEXT, OFF_Y_TEXT, mFont, QString::number(mId));
    QTransform _transform;
    _transform.rotate(mHeading);
    return _transform.map(_path);
}

// This function is the actual painting method
void drawGlyph(QPainter* painter)
{
    painter->setRenderHints(QPainter::Antialiasing, true);
    painter->setBrush(QBrush(mColor));
    painter->setPen(QPen(QBrush(Qt::black), 1));
    QPainterPath _path = createGlyph();
    painter->translate(mProjPoint.x(), mProjPoint.y());
    painter->drawPath(_path);
}
Of course:
mProjPoint is the position of the airplane,
mHeading is the heading (the direction where the airplane is pointing),
mId is a number identifying the airplane (will be displayed as a label under airplane glyph),
mColor is the color assigned to the airplane.
The problem here is the mix of rotation and translation: since the object is rotated, projShape() returns a bounding rectangle that doesn't fully overlap the object drawn on the map...
I also suspect that the center of the object is not correctly placed at mProjPoint. I tried several times to translate the bounding rectangle to center the object, without luck.
Another minor issue is the fill of the text: the label under the airplane glyph is not solid, but is filled with the same color as the airplane.
How can I fix this?
Generally speaking, the pattern is to rotate and scale about the origin first, and then finish with your final translation.
The following is pseudocode, but it illustrates the need to shift your object's origin to (0, 0) prior to doing any rotation or scaling. After the rotate and scale are done, the object can be moved back from (0, 0) to where it came from. From there, any post-translation step may be applied.
translate( -origin.x, -origin.y );
rotate( angle );
scale( scale.x, scale.y );
translate( origin.x, origin.y );
translate( translation.x, translation.y );
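In Qt, the same pattern can be expressed with QTransform. One thing to watch: when you map a path through a QTransform, the operation specified last in code is applied first to the points, so the calls appear in the reverse order of the pseudocode above (the accepted code further down relies on exactly this). A minimal sketch, with helper name and parameters of my own choosing:

#include <QPainterPath>
#include <QPointF>
#include <QTransform>

QPainterPath transformAboutOrigin(const QPainterPath &path,
                                  const QPointF &origin, qreal angleDegrees,
                                  const QPointF &scale, const QPointF &translation)
{
    QTransform t;
    t.translate(translation.x(), translation.y()); // final post-translation
    t.translate(origin.x(), origin.y());           // move back from (0, 0)
    t.scale(scale.x(), scale.y());
    t.rotate(angleDegrees);
    t.translate(-origin.x(), -origin.y());         // shift origin to (0, 0)
    return t.map(path);                            // operations apply bottom-up
}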
I finally managed to achieve the result I was after:
QPainterPath projShape() const
{
    // Build the airplane glyph (the same polygon as createGlyph() above)
    QPainterPath _path;
    QPolygon _glyph;
    _glyph << QPoint(0, 6) << QPoint(0, 8) << QPoint(14, 6)
           << QPoint(28, 8) << QPoint(28, 6) << QPoint(14, 0);
    _path.addPolygon(_glyph);
    QRectF _glyph_bounds = _path.boundingRect();

    // Center the label horizontally below the glyph
    QPainterPath _textpath;
    _textpath.addText(0, 0, mFont, QString::number(mId));
    QRectF _text_bounds = _textpath.boundingRect();
    _textpath.translate(_glyph_bounds.width() / 2 - _text_bounds.width() / 2,
                        _glyph_bounds.height() + _text_bounds.height());
    _path.addPath(_textpath);

    // QTransform applies the last-specified operation first: the path is
    // centered on the origin, rotated, and only then moved to mProjPoint.
    QTransform _transform;
    _transform.translate(mProjPoint.x(), mProjPoint.y());
    _transform.rotate(360 - mHeading);
    _transform.translate(-_path.boundingRect().width() / 2,
                         -_path.boundingRect().height() / 2);
    return _transform.map(_path);
}
void projPaint(QPainter* painter)
{
    painter->setRenderHint(QPainter::Antialiasing, true);
    painter->setRenderHint(QPainter::TextAntialiasing, true);
    painter->setRenderHint(QPainter::SmoothPixmapTransform, true);
    painter->setRenderHint(QPainter::HighQualityAntialiasing, true);
    painter->setBrush(QBrush(mColor));
    painter->setPen(QPen(QBrush(Qt::black), 1));
    painter->setFont(mFont);
    QPainterPath _path = projShape();
    painter->drawPath(_path);
}
Unluckily, I still suffer from the minor issue with the text fill mode:
I would like to have a solid black fill for the text instead of the mColor fill I use for the glyph/polygon.

Finding the Oriented Bounding Box of a Convex Hull in XNA Using Rotating Calipers

Perhaps this is more of a math question than a programming question, but I've been trying to implement the rotating calipers algorithm in XNA.
I've deduced a convex hull from my point set using a monotone chain as detailed on Wikipedia.
Now I'm trying to model my algorithm to find the OBB after the one found here:
http://www.cs.purdue.edu/research/technical_reports/1983/TR%2083-463.pdf
However, I don't understand what the DOTPR and CROSSPR methods it mentions on the final page are supposed to return.
I understand how to get the dot product of two points and the cross product of two points, but it seems these functions are supposed to return the dot and cross products of two edges / line segments. My knowledge of mathematics is admittedly limited, but this is my best guess as to what the algorithm is looking for:
public static float PolygonCross(List<Vector2> polygon, int indexA, int indexB)
{
    var segmentA1 = Vector2.Normalize(NextVertice(indexA, polygon) - polygon[indexA]);
    var segmentB1 = Vector2.Normalize(NextVertice(indexB, polygon) - polygon[indexB]);
    float crossProduct1 = CrossProduct(segmentA1, segmentB1);
    return crossProduct1;
}

public static float CrossProduct(Vector2 v1, Vector2 v2)
{
    return (v1.X * v2.Y - v1.Y * v2.X);
}

public static float PolygonDot(List<Vector2> polygon, int indexA, int indexB)
{
    var segmentA1 = Vector2.Normalize(NextVertice(indexA, polygon) - polygon[indexA]);
    var segmentB1 = Vector2.Normalize(NextVertice(indexB, polygon) - polygon[indexB]);
    float dotProduct = Vector2.Dot(segmentA1, segmentB1);
    return dotProduct;
}
However, when I use those methods as directed in this portion of my code...
while (PolygonDot(polygon, i, j) > 0)
{
    j = NextIndex(j, polygon);
}

if (i == 0)
{
    k = j;
}

while (PolygonCross(polygon, i, k) > 0)
{
    k = NextIndex(k, polygon);
}

if (i == 0)
{
    m = k;
}

while (PolygonDot(polygon, i, m) < 0)
{
    m = NextIndex(m, polygon);
}
...it returns the same index for j and k when I give it a test set of points:
List<Vector2> polygon = new List<Vector2>()
{
    new Vector2(0, 138),
    new Vector2(1, 138),
    new Vector2(150, 110),
    new Vector2(199, 68),
    new Vector2(204, 63),
    new Vector2(131, 0),
    new Vector2(129, 0),
    new Vector2(115, 14),
    new Vector2(0, 138),
};
Note that I call polygon.Reverse() to place these points in counterclockwise order, as indicated in the technical document from purdue.edu. My algorithm for finding the convex hull of a point set generates a list of points in counterclockwise order, but does so assuming y < 0 is higher than y > 0, because when drawing to the screen (0, 0) is the top-left corner. Reversing the list seems sufficient. I also remove the duplicate point at the end.
After this process, the data becomes:
Vector2(115, 14)
Vector2(129, 0)
Vector2(131, 0)
Vector2(204, 63)
Vector2(199, 68)
Vector2(150, 110)
Vector2(1, 138)
Vector2(0, 138)
This test fails on the first loop when i equals 0 and j equals 3. It finds that the cross product of the line (115,14) to (204,63) and the line (204,63) to (199,68) is 0. It then finds that the dot product of the same lines is also 0, so j and k share the same index.
In contrast, when given this test set:
http://www.wolframalpha.com/input/?i=polygon+%282%2C1%29%2C%281%2C2%29%2C%281%2C3%29%2C%282%2C4%29%2C%284%2C4%29%2C%285%2C3%29%2C%283%2C1%29
My code successfully returns this OBB:
http://www.wolframalpha.com/input/?i=polygon+%282.5%2C0.5%29%2C%280.5%2C2.5%29%2C%283%2C5%29%2C%285%2C3%29
I've read over the C++ algorithm found at http://www.geometrictools.com/LibMathematics/Containment/Wm5ContMinBox2.cpp but I'm too dense to follow it completely. It also appears to be very different from the one detailed in the paper above.
Does anyone know what step I'm skipping or see some error in my code for finding the dot product and cross product of two line segments? Has anyone successfully implemented this code before in C# and have an example?
Points and vectors as data structures are essentially the same thing; both consist of two floats (or three if you're working in three dimensions). So, when asked to take the dot product of the edges, I suppose it means taking the dot product of the vectors that the edges define. The code you provided does exactly this.
Your implementation of CrossProduct seems correct (see Wolfram MathWorld). However, in PolygonCross and PolygonDot I think you shouldn't normalize the segments. It will affect the magnitude of the return values of PolygonDot and PolygonCross. By removing the superfluous calls to Vector2.Normalize you can speed up your code and reduce the amount of noise in your floating point values. However, normalization is not relevant to the correctness of the code that you have pasted as it only compares the results with zero.
Note that the paper you refer to assumes that the polygon vertices are listed in counterclockwise order (page 5, first paragraph after "Beginning of comments") but your example polygon is defined in clockwise order. That's why PolygonCross(polygon, 0, 1) is negative and you get the same value for j and k.
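If it helps, a quick way to verify (and fix) the winding before running the algorithm is a signed-area (shoelace) test. This is a sketch with my own helper names, assuming the conventional y-up orientation where a positive signed area means counterclockwise:

public static float SignedArea(List<Vector2> polygon)
{
    // Shoelace formula: positive for counterclockwise winding (y-up).
    float area = 0f;
    for (int i = 0; i < polygon.Count; i++)
    {
        Vector2 a = polygon[i];
        Vector2 b = polygon[(i + 1) % polygon.Count];
        area += a.X * b.Y - a.Y * b.X;
    }
    return area / 2f;
}

public static void EnsureCounterClockwise(List<Vector2> polygon)
{
    if (SignedArea(polygon) < 0f)
    {
        polygon.Reverse();
    }
}

Remember that with y-down screen coordinates the visual sense of the winding flips, which is exactly the pitfall described above.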
I assume DOTPR is a normal vector dot product and CROSSPR is a cross product. A dot product returns a scalar, while a cross product returns a vector perpendicular to the two vectors given (basic vector math; check Wikipedia).
They are actually defined in the paper: DOTPR(i, j) returns the dot product of the vectors from vertex i to i+1 and from vertex j to j+1; the same goes for CROSSPR, but with the cross product.

Normal for surface of a cube in XNA

Can anyone please help?
I have a cube which I made in 3DS Max, and I don't know its dimensions. Is there a way to get the vertices of each of the triangles of the faces of the cube? I am trying to get the normal to one of the faces of the cube to determine which way it's pointing. If I can determine the vertices, I can compute the normal for a face: given three vertices V1, V2 and V3 in counterclockwise order, the direction of the normal is (V2 - V1) x (V3 - V1), where x is the cross product of the two vectors.
I have looked in my model's .fbx file and I can see a number of values there:
Vertices: *24 {
a: -15,-12.5,0,15,-12.5,0,-15,12.5,0,15,12.5,0,-15,-12.5,0.5,15,-12.5,0.5,-15,12.5,0.5,15,12.5,0.5}
PolygonVertexIndex: *36 {
a: 0,2,-4,3,1,-1,4,5,-8,7,6,-5,0,1,-6,5,4,-1,1,3,-8,7,5,-2,3,2,-7,6,7,-4,2,0,-5,4,6,-3}
Are these my model's vertices?
Also, I would assume that Vertices: *24 is my list of vertices, but why are there only 24 values? Should a cube not have 36 vertices? And finally, if the indices for my vertices are in PolygonVertexIndex: *36, those values just seem off to me when I imagine the cube in my head with those dimensions.
Or alternatively, is there an automatic way to get the vertices of a cube without having to enter all the values for each vertex manually? I might have a couple of models to work with.
Any help would be greatly appreciated.
I can't figure out why you need that... because when you load a model the normals are calculated internally, and each vertex will have its normal.
Anyway, it is easy to calculate.
The first three indices define the first triangle of a face; the next three define the other triangle of the same face.
You need only one triangle to calculate the normal.
So use the three indices to access the vertex array and get three points: A, B and C.
Now your normal is the result of the cross product of two vectors formed from those vertices:
Vector3 Normal = Vector3.Cross(B-A, C-B);
Whether the normal points backward or forward depends on the A, B, C order, which can be counterclockwise or clockwise, but every triangle of the model will be ordered the same way. So you will have to try it and fix the order if needed.
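To make that concrete against the .fbx data in the question: the 24 values under Vertices are 8 corner positions times 3 coordinates, and in FBX a negative entry in PolygonVertexIndex marks the last vertex of a polygon, the actual index being ~value (i.e. -value - 1). A sketch with my own helper names:

static int DecodeFbxIndex(int v)
{
    return v < 0 ? ~v : v;   // e.g. -4 decodes to index 3
}

static Vector3 ReadVertex(float[] vertices, int index)
{
    return new Vector3(vertices[index * 3],
                       vertices[index * 3 + 1],
                       vertices[index * 3 + 2]);
}

// triStart is the position of a triangle's first index (0, 3, 6, ...)
static Vector3 FaceNormal(float[] vertices, int[] polygonVertexIndex, int triStart)
{
    Vector3 a = ReadVertex(vertices, DecodeFbxIndex(polygonVertexIndex[triStart]));
    Vector3 b = ReadVertex(vertices, DecodeFbxIndex(polygonVertexIndex[triStart + 1]));
    Vector3 c = ReadVertex(vertices, DecodeFbxIndex(polygonVertexIndex[triStart + 2]));
    return Vector3.Cross(b - a, c - b);   // direction depends on winding, as noted above
}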
You can write an XNA program which reads your normals without much hassle.
If you still want to calculate them, however, use this C# code, taken from FFWD, as a guide. Check the URL for a more detailed discussion on pros and cons. Personally, I'm not too happy with the result, but for the time being it works. Of course, since this code is FFWD related (implementation of Unity's API for XNA), it does not match XNA exactly, but the mathematics remain the same.
/// <summary>
/// Recalculates the normals.
/// Implementation adapted from http://devmaster.net/forums/topic/1065-calculating-normals-of-a-mesh/
/// </summary>
public void RecalculateNormals()
{
    Vector3[] newNormals = new Vector3[_vertices.Length];

    // _triangles is a list of vertex indices,
    // with each triplet referencing the three vertices of the corresponding triangle
    for (int i = 0; i < _triangles.Length; i = i + 3)
    {
        Vector3[] v = new Vector3[]
        {
            _vertices[_triangles[i]],
            _vertices[_triangles[i + 1]],
            _vertices[_triangles[i + 2]]
        };

        Vector3 normal = Vector3.Cross(v[1] - v[0], v[2] - v[0]);

        // Accumulate the face normal into each of its vertices, weighted by
        // the corner angle, so large faces don't dominate shared vertices
        for (int j = 0; j < 3; ++j)
        {
            Vector3 a = v[(j + 1) % 3] - v[j];
            Vector3 b = v[(j + 2) % 3] - v[j];
            // .magnitude is Unity-style (this is FFWD code); plain XNA would use .Length()
            float weight = (float)Math.Acos(Vector3.Dot(a, b) / (a.magnitude * b.magnitude));
            newNormals[_triangles[i + j]] += weight * normal;
        }
    }

    // Normalize in place. (A foreach over an array of Vector3 structs would
    // only normalize copies, leaving the stored values untouched.)
    for (int i = 0; i < newNormals.Length; i++)
    {
        newNormals[i].Normalize();
    }

    normals = newNormals;
}

Away 3D Face Link

I've recently been playing with the Away3D library and have a problem finding the center of a face in Away3D. Why does Away3DLite have a face.center feature while Away3D doesn't? And what is the alternative solution for this?
If you want to find the center of a face, it's simply the average position of all the vertices making up that face:
function getFaceCenter(f : Face) : Vector3D
{
    var vert : Vertex;
    var ret : Vector3D = new Vector3D;

    for each (vert in f.vertices) {
        ret.x += vert.x;
        ret.y += vert.y;
        ret.z += vert.z;
    }

    ret.x /= f.vertices.length;
    ret.y /= f.vertices.length;
    ret.z /= f.vertices.length;

    return ret;
}
The above is a very simple function to calculate an average, although on a 3D vector instead of a simple scalar number. That average is the center of all the vertices in the face.
If you need to do this a lot, optimize the method by preventing it from allocating a new vector (pass in a vector to which the result should be written) and by caching the vertex list length in a temporary variable instead of dereferencing it through two object references (f and vertices) on every iteration, which is unnecessarily heavy.
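For example, a sketch of that optimized variant (the function name is mine, and the exact type of f.vertices depends on your Away3D version):

function getFaceCenterInto(f : Face, ret : Vector3D) : void
{
    var verts : * = f.vertices;   // Array or Vector.<Vertex>, depending on the Away3D version
    var len : int = verts.length;

    ret.x = ret.y = ret.z = 0;
    for (var i : int = 0; i < len; i++) {
        var vert : Vertex = verts[i];
        ret.x += vert.x;
        ret.y += vert.y;
        ret.z += vert.z;
    }

    ret.x /= len;
    ret.y /= len;
    ret.z /= len;
}

The caller allocates one Vector3D and reuses it across calls, so a per-frame loop over many faces allocates nothing.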

Encoding CGPoint with NSCoder, with full precision

In my iOS app, I have a shape class, built with CGPoints. I save it to a file using encodeCGPoint:forKey. I read it back in. That all works.
However, the CGPoint values I read in are not exactly equal to the values I saved. The low bits of the CGFloat values aren't stable, so CGPointEqualToPoint returns NO, which means my isEqual method returns NO. This causes me trouble and pain.
Obviously, serializing floats precisely has been a hassle since the beginning of time. But in this situation, what is the best approach? I can think of several:
write out the x and y values using encodeFloat instead of encodeCGPoint (would that help at all?)
multiply my x and y values by 256.0 before saving them (they're all going to be between -1 and 1, roughly, so this might help?)
write out the x and y values using encodeDouble instead of encodeCGPoint (still might round the lowest bit incorrectly?)
cast to NSUInteger and write them out using encodeInt32 (icky, but it would work, right?)
accept the loss of precision, and implement my isEqual method to use within-epsilon comparison rather than CGPointEqualToPoint (sigh)
EDIT-ADD: So the second half of the problem, which I was leaving out for simplicity, is that I have to implement the hash method for these shape objects.
Hashing floats is also a horrible pain (see "Good way to hash a float vector?"), and it turns out it more or less nullifies my question. The toolkit's encodeCGPoint: method rounds its float values in an annoying way (it's literally printing them to a string with the %g format), so there's no way I can use it and still make hashing reliable.
Therefore, I'm forced to write my own encodePoint function. As long as I'm doing that, I might as well write one that encodes the value exactly. (Copy two 32-bit floats into a 64-bit integer field, and no, it's not portable, but this is iOS-only and I'm making that tradeoff.)
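A sketch of that exact round-trip, under the question's iOS-only assumption that CGFloat is a 32-bit float (function names and the coder key are mine):

// (Requires <string.h> for memcpy; CGPoint/CGPointMake come from CoreGraphics.)
static uint64_t packCGPoint(CGPoint p)
{
    float x = p.x, y = p.y;             // CGFloat is a 32-bit float under this assumption
    uint32_t xbits, ybits;
    memcpy(&xbits, &x, sizeof(xbits));  // bit-for-bit copy, no rounding
    memcpy(&ybits, &y, sizeof(ybits));
    return ((uint64_t)xbits << 32) | ybits;
}

static CGPoint unpackCGPoint(uint64_t bits)
{
    uint32_t xbits = (uint32_t)(bits >> 32), ybits = (uint32_t)bits;
    float x, y;
    memcpy(&x, &xbits, sizeof(x));
    memcpy(&y, &ybits, sizeof(y));
    return CGPointMake(x, y);
}

// Usage with NSCoder:
//   [coder encodeInt64:(int64_t)packCGPoint(point) forKey:@"point"];
//   point = unpackCGPoint((uint64_t)[decoder decodeInt64ForKey:@"point"]);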
With reliable exact storage of CGPoints, I can go back to exact comparison and any old hash function I want. Tolerance ranges do nothing for me, so I'm just not using them for this application.
If I wanted hashing and tolerance comparisons, I'd be comparing values within a tolerance of N significant figures, not a fixed distance epsilon. (That is, I'd want 0.123456 to compare close to 0.123457, but I'd also want 1234.56 to compare close to 1234.57.) That would be stable against floating-point math errors, for both large and small values. I don't have sample code for that, but start with the frexpf() function and it shouldn't be too hard.
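For what it's worth, a rough sketch of that idea (untested; the name and exact tolerance semantics are mine): normalize the difference by the larger operand's binary exponent via frexpf(), which makes the comparison relative rather than absolute:

// (Requires <math.h>.)
static BOOL nearToSignificantFigures(float a, float b, float relTolerance)
{
    if (a == b) return YES;               // handles exact matches, including 0 == 0
    int expA, expB;
    frexpf(a, &expA);                     // a == mantissa * 2^expA, mantissa in [0.5, 1)
    frexpf(b, &expB);
    int scale = (expA > expB) ? expA : expB;
    // Scale the difference down by the larger exponent, so 0.123456 ~ 0.123457
    // and 1234.56 ~ 1234.57 pass or fail together.
    float diff = ldexpf(a - b, -scale);
    return fabsf(diff) <= relTolerance;
}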
Directly comparing floating point numbers is usually not the right game plan. Try one of the many other options. The best solution for your problem is probably your last suggestion; I don't know why there's a "sigh" there, though. A double precision floating point number has about 16 decimal digits' worth of precision; there's a very good chance that your program doesn't actually need that much precision.
Use the epsilon method, because the "low bits of the CGFloat values aren't stable" problem surfaces any time there's an implicit conversion between float and double (often in framework code; tgmath.h is useful for avoiding this in your own code).
I use the following functions (the tolerance defaulting to 0.5 because that's useful in the common case for CGGeometry):
BOOL OTValueNearToValueWithTolerance(CGFloat v1, CGFloat v2, CGFloat tolerance)
{
    return (fabs(v1 - v2) <= tolerance);
}

BOOL OTPointNearToPointWithTolerance(CGPoint p1, CGPoint p2, CGFloat tolerance)
{
    return (OTValueNearToValueWithTolerance(p1.x, p2.x, tolerance) &&
            OTValueNearToValueWithTolerance(p1.y, p2.y, tolerance));
}

BOOL OTSizeNearToSizeWithTolerance(CGSize s1, CGSize s2, CGFloat tolerance)
{
    return (OTValueNearToValueWithTolerance(s1.width, s2.width, tolerance) &&
            OTValueNearToValueWithTolerance(s1.height, s2.height, tolerance));
}

BOOL OTRectNearToRectWithTolerance(CGRect r1, CGRect r2, CGFloat tolerance)
{
    return (OTPointNearToPointWithTolerance(r1.origin, r2.origin, tolerance) &&
            OTSizeNearToSizeWithTolerance(r1.size, r2.size, tolerance));
}

BOOL OTValueNearToValue(CGFloat v1, CGFloat v2)
{
    return OTValueNearToValueWithTolerance(v1, v2, 0.5);
}

BOOL OTPointNearToPoint(CGPoint p1, CGPoint p2)
{
    return OTPointNearToPointWithTolerance(p1, p2, 0.5);
}

BOOL OTSizeNearToSize(CGSize s1, CGSize s2)
{
    return OTSizeNearToSizeWithTolerance(s1, s2, 0.5);
}

BOOL OTRectNearToRect(CGRect r1, CGRect r2)
{
    return OTRectNearToRectWithTolerance(r1, r2, 0.5);
}

BOOL OTPointNearToEdgeOfRect(CGPoint point, CGRect rect, CGFloat amount, CGRectEdge edge)
{
    CGRect nearRect, otherRect;
    CGRectDivide(rect, &nearRect, &otherRect, amount, edge);
    return CGRectContainsPoint(nearRect, point);
}
