Can anyone help me implement an area calculator from a group of latitudes and longitudes?
For example, a person walks around a building, and I record the latitude and longitude along the way. I want to calculate how many square feet the building occupies.
Well, I'm going to try to help you, but I'm not going to give the complete answer.
I think the first step is to convert the lats and longs into a Cartesian coordinate system. You should calculate the center of all points (a simple median of the coordinates).
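For the centering step, a minimal sketch in C (it uses a plain average rather than a true median; lats, lons, and count are hypothetical inputs holding the recorded fixes in degrees):
// Average latitude/longitude of the recorded points.
// A plain mean is fine for a building-sized loop away from the antimeridian.
void averageLatLon(const double *lats, const double *lons, int count,
                   double *outLat, double *outLon)
{
    double sumLat = 0.0, sumLon = 0.0;
    for (int i = 0; i < count; i++) {
        sumLat += lats[i];
        sumLon += lons[i];
    }
    *outLat = sumLat / count;
    *outLon = sumLon / count;
}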
Second step: convert all points to ENU (east, north, up) coordinates centered at that center.
I have done this step, and here it is:
Constants:
#define DEGREES_TO_RADIANS (M_PI/180.0)
#define WGS84_A (6378137.0) // WGS 84 semi-major axis constant in meters
#define WGS84_E (8.1819190842622e-2) // WGS 84 eccentricity
Structs:
// Earth-Centered, Earth-Fixed (ECEF) coordinates
typedef struct {
    double x;
    double y;
    double z;
} ECEFCoordinate;

// Local East-North-Up (ENU) coordinates
typedef struct {
    double east;
    double north;
    double up;
} ENUCoordinate;
Methods (you need to pass through ECEF):
#pragma mark Geodetic utilities definition
- (ECEFCoordinate)ecefFromLatitude:(double)lat longitude:(double)lon andAltitude:(double)alt
{
    double clat = cos(lat * DEGREES_TO_RADIANS);
    double slat = sin(lat * DEGREES_TO_RADIANS);
    double clon = cos(lon * DEGREES_TO_RADIANS);
    double slon = sin(lon * DEGREES_TO_RADIANS);
    // N is the prime vertical radius of curvature at this latitude
    double N = WGS84_A / sqrt(1.0 - WGS84_E * WGS84_E * slat * slat);
    ECEFCoordinate ecef;
    ecef.x = (N + alt) * clat * clon;
    ecef.y = (N + alt) * clat * slon;
    ecef.z = (N * (1.0 - WGS84_E * WGS84_E) + alt) * slat;
    return ecef;
}
// Converts ECEF to ENU coordinates centered at the given lat/lon (with ecefCenter)
- (ENUCoordinate)enuFromECEFCenter:(ECEFCoordinate)ecefCenter withLat:(double)lat andLon:(double)lon fromEcef:(ECEFCoordinate)ecef
{
    double clat = cos(lat * DEGREES_TO_RADIANS);
    double slat = sin(lat * DEGREES_TO_RADIANS);
    double clon = cos(lon * DEGREES_TO_RADIANS);
    double slon = sin(lon * DEGREES_TO_RADIANS);
    double dx = ecefCenter.x - ecef.x;
    double dy = ecefCenter.y - ecef.y;
    double dz = ecefCenter.z - ecef.z;
    ENUCoordinate enu;
    enu.east = -slon * dx + clon * dy;
    enu.north = -slat * clon * dx - slat * slon * dy + clat * dz;
    enu.up = clat * clon * dx + clat * slon * dy + slat * dz;
    return enu;
}
Last step: calculate the area of the bunch of points in the Cartesian (east, north) coordinate system, same as (x, y). I think the easy way is to use triangles from the center to each pair of consecutive points.
Good luck.
One last bit of help to calculate the area: I think you can find more help (and maybe a better way) on the internet.
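For instance, a minimal sketch of the shoelace formula over the ENU (east, north) points, assuming they are ordered along the walk and form a simple, non-self-intersecting polygon:
#include <math.h> // for fabs

// Shoelace formula: area of a simple polygon from its ordered vertices.
double polygonAreaSquareMeters(const ENUCoordinate *pts, int count)
{
    double sum = 0.0;
    for (int i = 0; i < count; i++) {
        int j = (i + 1) % count; // next vertex, wrapping back to the start
        sum += pts[i].east * pts[j].north - pts[j].east * pts[i].north;
    }
    return fabs(sum) / 2.0; // square meters
}
Since the question asks for square feet, multiply the result by 10.7639 (square feet per square meter).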
Firstly, I am not using 3Js in my Orbits app because I encountered a number of limitations, including, but not limited to, issues with texture resolution and my requirement for complex lighting equations; but I would like to implement something like 3Js' raycaster to let me detect the object the user clicks.
I'm new to WebGL, but an "old hand" in software development, so I'm looking for some hints about where to start.
The approach is as follows:
You render your scene twice: once normally, which is displayed, and a second time, not displayed, with each object uniquely coloured. Then you use gl.readPixels on the second render at the click position from the first and decode the colour to identify the object.
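The decode step is plain base-256 arithmetic; here is a minimal sketch (written in C for concreteness, and assuming the IDs were packed little-endian into the RGB channels when the pick scene was coloured):
// Recover the object ID from the colour read back with gl.readPixels,
// assuming it was encoded as id = r + g*256 + b*65536 when drawing.
unsigned int objectIdFromColor(unsigned char r, unsigned char g, unsigned char b)
{
    return (unsigned int)r | ((unsigned int)g << 8) | ((unsigned int)b << 16);
}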
Now I have to implement it myself.
Picking spheres
When picking spheres, or objects that are separated (not one inside another), you can use a simple distance-from-ray test to find the closest object very quickly.
Example
The function below returns a function that does the calculation. As you are only interested in the closest object, the distances can remain squared. The distance from the camera is held as a unit distance along the ray.
function distanceFromRay() {
    var dSqr, ox, oy, oz, vx, vy, vz;
    function distanceSqr(px, py, pz) {
        const ax = px - ox, ay = py - oy, az = pz - oz;
        const u = (ax * vx + ay * vy + az * vz) / dSqr;
        distanceSqr.unit = u;
        if (u > 0) { // is past origin
            const bx = ox + vx * u - px, by = oy + vy * u - py, bz = oz + vz * u - pz;
            return bx * bx + by * by + bz * bz; // dist sqr to closest point on ray
        }
        return Infinity;
    }
    distanceSqr.unit = 0;
    distanceSqr.setRay = function (x, y, z, xx, yy, zz) { // ray from origin x, y, z,
        // infinite length along xx, yy, zz
        ox = x; oy = y; oz = z;
        vx = xx; vy = yy; vz = zz;
        dSqr = vx * vx + vy * vy + vz * vz;
    };
    return distanceSqr;
}
Usage
There is a one-time setup call:
// setup
const distToRay = distanceFromRay();
At the start of a frame that requires a pick, calculate the pick ray and set it. Also initialise the minimum distance from the ray and the distance from the eye.
// at start of frame set pick ray
distToRay.setRay(eye.x, eye.y, eye.z, pointer.ray.x, pointer.ray.y, pointer.ray.z);
var minDist = maxObjRadius * maxObjRadius;
var nearestObj = undefined;
var eyeDist = Infinity;
Then, for each pickable object, get the distance by passing the object's center, and compare it against the best distance found so far this frame, the object's radius, and the distance from the eye.
// per object
const dis = distToRay(obj.pos.x, obj.pos.y, obj.pos.z);
if (dis < obj.radius * obj.radius && dis < minDist && distToRay.unit > 0 && distToRay.unit < eyeDist) { // compare squared distances
minDist = dis;
eyeDist = distToRay.unit;
nearestObj = obj;
}
At the end of the frame, if nearestObj is not undefined, it will hold a reference to the picked object.
// end of frame
if (nearestObj) {
    // you have the closest object
}
I'm building a ray tracer as an assignment. I'm trying to get refraction working for spheres, and I have it half-working. The problem is that I can't get rid of the black dot in the centre of the sphere.
This is the code for the intersection:
double a = rayDirection.DotProduct(rayDirection);
double b = rayOrigin.VectAdd(sphereCenter.Negative()).VectMult(2).DotProduct(rayDirection);
double c = rayOrigin.VectAdd(sphereCenter.Negative()).DotProduct(rayOrigin.VectAdd(sphereCenter.Negative())) - (radius * radius);
double discriminant = b * b - 4 * a * c;
if (discriminant >= 0)
{
    // the ray intersects the sphere
    // the first root
    double root1 = ((-1 * b - sqrt(discriminant)) / (2.0 * a)) - 0.000001;
    double root2 = ((-1 * b + sqrt(discriminant)) / (2.0 * a)) - 0.000001;
    if (root1 > 0.00001)
    {
        // the first root is the smallest positive root
        return root1;
    }
    else
    {
        // the second root is the smallest positive root
        return root2;
    }
}
else
{
    // the ray missed the sphere
    return -1;
}
This is the code responsible for computing the direction of the new refracted ray:
double n1 = refractionRay.GetRefractiveIndex();
double n2 = sceneObjects.at(indexOfWinningObject)->GetMaterial().GetRefractiveIndex();
if (n1 == n2)
{
    // ray is inside the same material, so it will be refracted to the outside
    n2 = 1.000293;
}
double n = n1 / n2;
Vect I = refractionRay.GetRayDirection();
Vect N = sceneObjects.at(indexOfWinningObject)->GetNormalAt(intersectionPosition);
double cosTheta1 = -N.DotProduct(I);
// we need the normal pointing towards the side the ray is coming from
if (cosTheta1 < 0)
{
    N = N.Negative();
    cosTheta1 = -N.DotProduct(I);
}
double cosTheta2 = sqrt(1 - (n * n) * (1 - (cosTheta1 * cosTheta1)));
Vect refractionDirection = I.VectMult(n).VectAdd(N.VectMult(n * cosTheta1 - cosTheta2));
Ray newRefractionRay(intersectionPosition.VectAdd(refractionDirection.VectMult(0.001)), refractionDirection, n2, refractionRay.GetRemainingIntersections());
When creating the new refracted ray, I tried adding the direction times a small value to the intersection position, to move the origin of the new ray inside the sphere. The size of the black dot changes if I change that small value; if I make it too big, the margins of the sphere start turning black as well.
If I add colour to the object, it looks like this:
And if I make that small constant bigger (0.1), this happens:
Is there a special condition I should take into account? Thank you!
You should remove the epsilon factors that you subtract when you calculate the two roots:
double root1 = (-1 * b - sqrt(discriminant)) / (2.0 * a);
double root2 = (-1 * b + sqrt(discriminant)) / (2.0 * a);
In my experience, the only place you need a comparison against epsilon is when checking whether the found root is along the path of the ray and not at its origin, per your:
if (root1 > 0.00001)
NB: you could eke out a little more performance by doing the square root calculation only once, and by only calculating root2 when root1 <= epsilon.
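A minimal sketch of that suggestion, assuming the quadratic coefficients a and b and the discriminant are computed as in the question:
#include <math.h>

// Smallest positive root of the sphere-intersection quadratic,
// computing sqrt() once and the far root only when needed.
double nearestRoot(double a, double b, double discriminant)
{
    const double eps = 0.00001;
    double sqrtD = sqrt(discriminant);       // single square-root calculation
    double root1 = (-b - sqrtD) / (2.0 * a); // near root
    if (root1 > eps)
        return root1;
    return (-b + sqrtD) / (2.0 * a);         // far root, only when needed
}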
What is the ratio between meters and points in Sprite Kit?
The Apple documentation says that speeds and accelerations are in meters/sec and meters/sec^2, but it doesn't give a conversion to points/sec.
I tried measuring the speed of an object both with the sprite's physics-body velocity attribute and by manually calculating the points per second, and I came up with roughly a 1:1 ratio, meaning 1 m/s = 1 point/s.
Can anyone confirm that, or am I completely wrong?
Here is the code I used for the calculation:
double dt = currentTime - previousTime;
previousTime = currentTime;
double x = ball.physicsBody.velocity.dx;
double y = ball.physicsBody.velocity.dy;
double mod = sqrt(x * x + y * y);
double x2 = (ball.position.x - previousPosition.x) / dt;
double y2 = (ball.position.y - previousPosition.y) / dt;
double mod2 = sqrt(x2 * x2 + y2 * y2);
if (mod2 != 0) {
    totalSpeed = totalSpeed + mod2;
    j++;
}
double mod3 = totalSpeed / j; // running average of the measured speed
NSLog(@"Ball Speed: %.2f - %.2f - %.2f", mod, mod2, mod3);
previousPosition = ball.position;
In my application, a user taps 3 times, and an angle is created from the 3 tapped points. It draws the angle perfectly. I am trying to calculate the angle at the second tap, but I think I am doing it wrong (probably a math error). I haven't covered this in my calculus class yet, so I am going off of a formula from Wikipedia:
http://en.wikipedia.org/wiki/Law_of_cosines
Here is what I am trying:
Note: first, second, and third are CGPoints created at the user's taps.
CGFloat xDistA = (second.x - third.x);
CGFloat yDistA = (second.y - third.y);
CGFloat a = sqrt((xDistA * xDistA) + (yDistA * yDistA));
CGFloat xDistB = (first.x - third.x);
CGFloat yDistB = (first.y - third.y);
CGFloat b = sqrt((xDistB * xDistB) + (yDistB * yDistB));
CGFloat xDistC = (second.x - first.x);
CGFloat yDistC = (second.y - first.y);
CGFloat c = sqrt((xDistC * xDistC) + (yDistC * yDistC));
CGFloat angle = acos(((a*a)+(b*b)-(c*c))/((2*(a)*(b))));
NSLog(#"FULL ANGLE IS: %f, ANGLE IS: %.2f",angle, angle);
Sometimes it gives the angle as 1, which doesn't make sense to me. Can anyone explain why this is, or how to fix it, please?
Not sure if this is the main problem, but it is a problem:
Your calculation gives the angle at the wrong point.
To get the angle in green (which is probably the angle you want, based on your variable names "first", "second", and "third"), use:
CGFloat angle = acos(((a*a)+(c*c)-(b*b))/((2*(a)*(c))));
Here's a way that circumvents the law of cosines and instead calculates the angles of the two vectors directly. The difference between the angles is the value you are looking for:
CGVector vec1 = { first.x - second.x, first.y - second.y };
CGVector vec2 = { third.x - second.x, third.y - second.y };
CGFloat theta1 = atan2f(vec1.dy, vec1.dx);
CGFloat theta2 = atan2f(vec2.dy, vec2.dx);
CGFloat angle = theta1 - theta2;
NSLog(#"angle: %.1f°, ", angle / M_PI * 180);
Note the atan2 function that takes the x and y components as separate arguments and thus avoids the 0/90/180/270° ambiguity.
The cosine formula implementation looks right; did you take into account that acos() returns the angle in radians, not degrees? To convert to degrees, multiply the angle by 180 and divide by pi (3.14159...).
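As a one-line sketch of that conversion:
#include <math.h>

// acos() works in radians; convert before displaying.
double radiansToDegrees(double radians)
{
    return radians * 180.0 / M_PI;
}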
The way I have done it is to calculate the two angles separately using atan2(y, x), then use this function:
#include <math.h>

static inline double
AngleDiff(const double Angle1, const double Angle2)
{
    // Absolute difference between the two angles, folded into [0, pi]
    double diff = fabs(Angle1 - Angle2);
    if (diff > M_PI) {
        diff = (2.0 * M_PI) - diff;
    }
    return diff;
}
The function deals in radians, but you can change M_PI to 180 and 2.0 * M_PI to 360 to work in degrees.
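A hypothetical usage with the three tapped points from the question (the vertex is the second tap):
// Directions from the vertex (second) to the two outer points.
double theta1 = atan2(first.y - second.y, first.x - second.x);
double theta2 = atan2(third.y - second.y, third.x - second.x);
double angleAtSecond = AngleDiff(theta1, theta2); // radians, in [0, pi]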
Using this answer to compute the angle of a vector:
CGFloat angleForVector(CGFloat dx, CGFloat dy) {
    return atan2(dx, -dy) * 180.0 / M_PI;
}

// Compute the angle at point Corner, that is, between AC and BC:
CGFloat angle = angleForVector(A.x - Corner.x, A.y - Corner.y)
              - angleForVector(B.x - Corner.x, B.y - Corner.y);
NSLog(@"FULL ANGLE IS: %f, ANGLE IS: %.2f", angle, angle);
I'm trying to run Madgwick's sensor fusion algorithm on iOS. Since the code is open source, I already included it in my project and call the methods with the provided sensor values.
But it seems that the algorithm expects the sensor measurements in a different coordinate system. The Apple CoreMotion sensor system is given on the right side, Madgwick's on the left, in the picture of the two coordinate systems. Both systems follow the right-hand rule.
To me it looks like a 90-degree rotation around the z-axis, but that didn't work.
I also tried flipping the x and y axes (and inverting the z axis), as suggested by other Stack Overflow posts for WP, but that didn't work either. So, do you have a hint?
It would be perfect if Madgwick's algorithm output could be in the same system as the CoreMotion output (CMAttitudeReferenceFrameXMagneticNorthZVertical).
Furthermore, I'm looking for a good working value for betaDef on the iPhone. betaDef is a kind of proportional gain and is currently set to 0.1f.
Any help on how to achieve this would be appreciated.
I'm not sure how to write this in Objective-C, but here's how I accomplished the coordinate transformations in vanilla C. I also wanted to rotate the orientation so that +y is north; this translation is also reflected in the method below.
This method expects a 4-element quaternion in wxyz order and returns a translated quaternion in the same format:
// Forward declaration; quatMult is defined below.
void quatMult(float *a, float *b, float *ret);

void madgeq_to_openglq(float *fMadgQ, float *fRetQ)
{
    float fTmpQ[4];
    // Rotate around the Z-axis by 90 degrees:
    float fXYRotationQ[4] = { sqrt(0.5), 0, 0, -1.0 * sqrt(0.5) };
    // Invert the rotation vector components to accommodate handedness issues:
    fTmpQ[0] = fMadgQ[0];
    fTmpQ[1] = fMadgQ[1] * -1.0f;
    fTmpQ[2] = fMadgQ[2];
    fTmpQ[3] = fMadgQ[3] * -1.0f;
    // And then store the translated rotation into fRetQ:
    quatMult(fTmpQ, fXYRotationQ, fRetQ);
}
// Quaternion multiplication operator. Expects its 4-element arrays in wxyz order.
void quatMult(float *a, float *b, float *ret)
{
    ret[0] = (b[0] * a[0]) - (b[1] * a[1]) - (b[2] * a[2]) - (b[3] * a[3]);
    ret[1] = (b[0] * a[1]) + (b[1] * a[0]) + (b[2] * a[3]) - (b[3] * a[2]);
    ret[2] = (b[0] * a[2]) + (b[2] * a[0]) + (b[3] * a[1]) - (b[1] * a[3]);
    ret[3] = (b[0] * a[3]) + (b[3] * a[0]) + (b[1] * a[2]) - (b[2] * a[1]);
}
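A hypothetical usage sketch, assuming q0..q3 are the quaternion state variables exposed by Madgwick's open-source filter:
// Translate the filter's wxyz quaternion into the rotated, handedness-corrected frame.
float madgQ[4] = { q0, q1, q2, q3 };
float glQ[4];
madgeq_to_openglq(madgQ, glQ);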
Hope that helps!