I have implemented trilateration following this example, and it was very helpful. One small thing I still need: the results (X, Y, Z) returned here are in the Earth-centered (ECEF) coordinate system, and I need to convert them into a local coordinate system to place them on a map. I have found some suggestions for converting to local coordinates from lat/long (for example: this), but I was wondering: is it possible to convert directly from the Earth-centered coordinates?
Yes, assuming your local coordinates are East, North, Up (ENU):
http://en.wikipedia.org/wiki/Geodetic_system#From_ECEF_to_ENU
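As a rough sketch of that conversion (the function name and the example reference point are my own choices; the formulas follow the ECEF-to-ENU rotation on the linked page, and assume you know the geodetic latitude/longitude of your local origin):

```python
import math

def ecef_to_enu(x, y, z, x0, y0, z0, lat0_deg, lon0_deg):
    """Convert an ECEF point (x, y, z) to local East, North, Up coordinates
    relative to a reference point (x0, y0, z0) whose geodetic latitude and
    longitude are lat0_deg / lon0_deg (degrees)."""
    lat0 = math.radians(lat0_deg)
    lon0 = math.radians(lon0_deg)
    dx, dy, dz = x - x0, y - y0, z - z0

    east = -math.sin(lon0) * dx + math.cos(lon0) * dy
    north = (-math.sin(lat0) * math.cos(lon0) * dx
             - math.sin(lat0) * math.sin(lon0) * dy
             + math.cos(lat0) * dz)
    up = (math.cos(lat0) * math.cos(lon0) * dx
          + math.cos(lat0) * math.sin(lon0) * dy
          + math.sin(lat0) * dz)
    return east, north, up
```

With the reference point on the equator at the prime meridian, a point 1 m further along the ECEF x axis comes out as purely "up", as you would expect.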
Related
I have an issue I am trying to sort out.
So basically I have a coordinate system where +x = traveling East, +y is up, and +z is traveling North. Effectively I have taken Lat/Long and projected it to OSGB.
I have a list of points in this coordinate system, but when I render them they are flipped on one axis, the Z (north) axis, so my point list looks incorrect. This is because the rendering API has the +z axis running the other way.
I was thinking my solution could be: have all my objects/3D models/points etc. defined in my "real world" coordinate system, then, at the last moment before rendering, apply a scale matrix (1, 1, -1) to each of the world matrices so that the Z axis is flipped on everything.
So if my real world projected coordinate is: 281852; 161.488; 655844
After I apply my "RealWorldToXNA" matrix, the point will be 281852; 161.488; -655844.
I will then apply the same thing to my camera so it renders from the correct position.
Will this work, or am I missing something? I haven't done much 3D maths lately and have suffered a bout of cerebral flatulence. Part of my brain thinks this will work, but another part thinks it shouldn't be so simple.
FYI, I used the solution in my question: just tested it, and it did in fact work as expected.
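For anyone else hitting this, the flip described above can be sketched like so (NumPy here rather than XNA, and the matrix name is just illustrative; the point values come from the question):

```python
import numpy as np

# A 4x4 homogeneous scale matrix that negates Z, mirroring the
# Scale(1, 1, -1) step described in the question.
real_world_to_render = np.diag([1.0, 1.0, -1.0, 1.0])

point = np.array([281852.0, 161.488, 655844.0, 1.0])  # homogeneous point
flipped = real_world_to_render @ point
# flipped[:3] is now (281852, 161.488, -655844)

# Applied to a world matrix W, the render-space matrix is S @ W, so the
# same flip is applied to every object (and the camera) consistently.
```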
I want to ask this question without thinking about a specific technology. Suppose I pull a map tile from any maps provider using my real world location. How can I mark my location on this map tile? What is the calculation used here to convert longitude and latitude to pixels?
I have worked on OpenGL methods to view data on the earth, and I would summarize the positioning process as follows. This is by no means the only way to do it, but hopefully it helps you think about the problem.
Treat the earth's core as the origin of a sphere, and convert the polar coordinates (latitude, longitude, radius) into (x, y, z) for every map point. Do the same for any particular mark you are interested in.
At this point, you would need to pick a view origin. Say this is your location.
Rotate everything by the view origin's negative longitude about the z-axis.
Rotate everything by the view origin's negative latitude about the y-axis.
At this point, the Cartesian coordinates of all the points have the view location as their origin; essentially, you are looking straight down at the view origin.
Finally, scale it down and translate so that (x,y) fits in your coordinate system.
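A minimal Python sketch of those steps (the function name, the axis conventions, and the choice to read off (y, z) as screen coordinates are all assumptions for illustration; rotation signs depend on your handedness convention):

```python
import math

def latlon_to_view_xy(lat_deg, lon_deg, view_lat_deg, view_lon_deg, scale=1.0):
    """Project (lat, lon) onto a plane centered on the view origin by
    rotating the sphere so the view origin lands on the +x axis."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    vlat, vlon = math.radians(view_lat_deg), math.radians(view_lon_deg)

    # Spherical -> Cartesian on a unit sphere (radius folded into `scale`).
    x = math.cos(lat) * math.cos(lon)
    y = math.cos(lat) * math.sin(lon)
    z = math.sin(lat)

    # Rotate by the view origin's negative longitude about the z axis.
    x, y = (x * math.cos(-vlon) - y * math.sin(-vlon),
            x * math.sin(-vlon) + y * math.cos(-vlon))

    # Rotate about the y axis so the view origin lands on the +x axis
    # (the sign here depends on the axis convention).
    x, z = (x * math.cos(vlat) + z * math.sin(vlat),
            -x * math.sin(vlat) + z * math.cos(vlat))

    # (y, z) now span the tangent plane at the view origin: scale to pixels.
    return scale * y, scale * z
```

The view origin itself maps to (0, 0); points to its east get a positive screen x, points to its north a positive screen y.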
I'm trying to get the 3D coordinates of a point from the triangulation of two views.
I'm not really sure if my results are correct or not.
For example, I'm not sure whether the signs of the coordinates are correct, because I'm not sure how the camera frame is oriented.
Is the positive z axis pointing into or out of the image plane?
And what about x and y? Do they follow the right-hand rule?
Thanks in advance for a clarification!
The coordinate system is set according to the image and the description on this webpage
I know that Posit calculates the translation and rotation between your camera and a 3d object.
The only problem I have right now is that I have no idea how the coordinate systems of the camera and the object are defined.
So, for example, if I get 90° around the z-axis, in which direction is the z-axis pointing, and is the object rotating around this axis, or is the camera?
Edit:
After some testing and playing around with different coordinate systems, I think this is right:
definition of the camera coordinate system:
z-axis is pointing in the direction, in which the camera is looking.
x-axis is pointing to the right, while looking in z-direction.
y-axis is pointing up, while looking in z-direction.
The object is defined in the same coordinate system, but each point is defined relative to the starting point, not to the coordinate system's origin.
The translation vector you get tells you how point[0] of the object is moved away from the origin of the camera coordinate system.
The rotation matrix tells you how to rotate the object in the camera's coordinate system in order to get the object's starting orientation. So the rotation matrix basically doesn't tell you how the object is rotated right now; it tells you how to reverse its current orientation.
Can anyone confirm this?
Check out this answer.
The Y axis is pointing downward. I don't know what you mean by "starting point". The camera lies at the origin of its coordinate system, and object points are defined in this system.
You are right about the rotation matrix, well, half right. The rotation matrix tells you how to rotate the coordinate system so that it is oriented the same as the coordinate system used to define the model of the object. So it does tell you how the object is oriented with respect to the camera coordinate system.
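A tiny numerical illustration of that reading of R (assuming the common convention p_camera = R · p_model + t, which matches the description above; the 90° angle is just an example):

```python
import numpy as np

theta = np.radians(90.0)  # example: pose estimation reports 90 deg about z

# Rotation matrix for 90 degrees about the z axis.
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

p_model = np.array([1.0, 0.0, 0.0])  # a point on the model's +x axis
p_camera = R @ p_model               # where that point lies in camera axes
```

So R carries model-frame directions into the camera frame, which is exactly "how the object is oriented with respect to the camera"; to go the other way, apply R transposed.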
I have a set of x,y coordinates denoting the outlines of the continents in a Miller Projection in MATLAB. I'm trying to figure out the MATLAB mapping toolbox and specifically projinv. The function takes a set of (x,y) coordinates and a projection, and then transforms them into a set of longitude and latitudes.
What I'm confused on is what the units on the (x,y) coordinates should be. The example in the docs seems to convert them into Survey Feet, but I can't find any documentation on how to properly scale the input mapping.
Any suggestions?
Not quite sure what you're asking here...
The Boston roads example in the docs converts units from survey feet into meters, and then passes them into projinv. Thus, (x, y) should be in meters.
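To see why the units matter, here is a hand-rolled sketch of the inverse Miller projection in Python (a spherical Miller with a WGS84-radius sphere is my assumption here; MATLAB's projinv uses whatever projection structure you give it, but the math is the same shape):

```python
import math

R = 6378137.0  # sphere radius in meters (assumed WGS84 semi-major axis)

def miller_inverse(x, y):
    """Invert the spherical Miller projection: (x, y) in METERS -> degrees.
    Feed it feet instead of meters and the latitudes come out wildly
    wrong, which is why the input units matter for projinv too."""
    lon = math.degrees(x / R)
    lat = math.degrees(2.5 * (math.atan(math.exp(0.8 * y / R)) - math.pi / 4.0))
    return lat, lon
```

Round-tripping the forward Miller formula y = 1.25 R ln(tan(π/4 + 0.4 φ)) recovers the original latitude, but only when x and y are in the same meters as R.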
Is this what you mean?