I want to ask this question without thinking about a specific technology. Suppose I pull a map tile from any map provider using my real-world location. How can I mark my location on this map tile? What is the calculation used to convert longitude and latitude to pixels?
I have worked on OpenGL methods to view data on the earth, and I think I'd summarize the positioning process as follows. This is by no means the only way to do it, but hopefully it helps you think about the problem.
Treat the earth's core as the origin of a sphere, and convert the spherical coordinates (latitude, longitude, radius) into Cartesian (x, y, z) for every map point. Do the same for any particular marker you are interested in.
At this point, you would need to pick a view origin. Say this is your location.
Rotate everything by the view origin's negative longitude about the z-axis.
Rotate everything by the view origin's negative latitude about the y-axis.
At this point, the Cartesian coordinates of all points are expressed relative to the view origin. Essentially, you are looking straight down at the view origin.
Finally, scale it down and translate so that (x,y) fits in your coordinate system.
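A minimal sketch of those steps in Python (the radius constant and function names are mine, assuming a spherical earth; the sign of the latitude rotation depends on your rotation-matrix convention):

    import math

    EARTH_RADIUS = 6371000.0  # mean earth radius in meters (spherical-earth assumption)

    def latlon_to_xyz(lat_deg, lon_deg, radius=EARTH_RADIUS):
        # Spherical (latitude, longitude, radius) -> Cartesian (x, y, z),
        # with the earth's core at the origin.
        lat, lon = math.radians(lat_deg), math.radians(lon_deg)
        return (radius * math.cos(lat) * math.cos(lon),
                radius * math.cos(lat) * math.sin(lon),
                radius * math.sin(lat))

    def view_from(point, view_lat_deg, view_lon_deg):
        # Rotate by the view origin's negative longitude about the z-axis,
        # then about the y-axis so the view origin lands on the +x axis.
        x, y, z = point
        a = math.radians(-view_lon_deg)
        x, y = x * math.cos(a) - y * math.sin(a), x * math.sin(a) + y * math.cos(a)
        b = math.radians(view_lat_deg)
        x, z = x * math.cos(b) + z * math.sin(b), -x * math.sin(b) + z * math.cos(b)
        return x, y, z  # x now points toward the viewer; (y, z) span the view plane

Scaling and translating the resulting (y, z) pairs into screen coordinates is then an ordinary 2D affine transform.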
I have an image with the worldfile like this:
0.298582141739
0.000000000000
0.000000000000
-0.298582141739
1283836.327077804830
6134835.890168172310
I think it is in a projected coordinate system. I want to convert it to a geographic coordinate system, because I want to know the latitude/longitude of each pixel of the image.
How can I convert it? I checked worldfileread in MATLAB; it can read both kinds, but I did not find how to convert between them.
Does anyone know how to do this? Thank you.
I'd say you need to know the coordinate system of the map. The world file simply describes a generic affine transformation in the plane, which allows you to convert from image pixels to map coordinates and vice versa. Getting from there to geographic coordinates depends on the map projection.
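For concreteness, here is the pixel-to-map half of that transformation as a small Python sketch, using the six world file lines in their standard A, D, B, E, C, F order:

    def pixel_to_map(col, row, wf):
        # wf holds the world file lines in file order:
        # A (x pixel size), D and B (rotation/skew terms),
        # E (y pixel size, usually negative),
        # C and F (map coordinates of the center of the upper-left pixel)
        A, D, B, E, C, F = wf
        x = A * col + B * row + C
        y = D * col + E * row + F
        return x, y

    wf = (0.298582141739, 0.0, 0.0, -0.298582141739,
          1283836.327077804830, 6134835.890168172310)
    print(pixel_to_map(0, 0, wf))  # center of the upper-left pixel, in map units

Inverting this affine transformation gives the map-to-pixel direction; neither direction tells you anything about latitude and longitude until you know the projection.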
If you don't know that projection, you need to ask the creator of the image or make some wild guesses. If you know what the image depicts, you can rule out some map projections: for example, if the image is of a city in the US, then the UTM zones for France probably won't apply. But chances are there are a number of rather similar projections and coordinate systems in use for the depicted location, and deciding between them will be tough.
One thing you can already rule out: the world file does not directly map to geographic coordinates measured in degrees (i.e., an equirectangular projection). That's because the upper-left corner of the image would have an easting of 1283836.327077804830 and a northing of 6134835.890168172310, which makes no sense as degrees. The values could be in some other angular unit, but I doubt it: the first and fourth lines have the same absolute value, so your pixels are square in map units, and away from the equator a degree of longitude covers much less ground than a degree of latitude, so a degree-based map with square pixels would look horizontally stretched. So unless your picture displays something close to the equator, or looks highly distorted in terms of horizontal vs. vertical extents, I'd say this is not a geographic coordinate system; meters or something like that are more plausible.
What is best strategy to recreate part of a street in iOS SceneKit using .osm XML data?
Please assume part of a street is given in the OSM XML data and contains the necessary geopoints, with latitude and longitude on the Nodes, describing the paths/footprints of 6 buildings (i.e., ground-floor plans that line the side of a street).
Specifically, what's the best strategy to convert those latitude and longitude Nodes in order to place the building footprints/polygons on the ground plane of a SceneKit scene on iOS (i.e., running through position (0, 0, 0))? Thank you.
Very roughly and briefly, based on my own experience with 3D map rendering:
Transform the XML data from lat/long to appropriate coordinates for a 2D map (that is, project it to a plane using a map projection, then apply a 2D affine transform to get it into screen pixel coordinates). Create a 2D map that's wider and taller than the actual screen, because of what's going to happen in the next step.
Using a 3D coordinate system with your map vertical (i.e., set all the Z coordinates to zero), rotate the map so that it reclines at an appropriate shallow angle, as if you're in an aeroplane looking down on it; the angle might be 30 degrees from horizontal. To rotate the map you'll need to create a 3D rotation matrix. The axis of rotation will be the X axis: that is, the horizontal line that is the bottom border of your 2D map. The rotation is exactly the same as what happens when you rotate your laptop screen away from you.
Supply the new 3D coordinates to your rendering system. I haven't used SceneKit but I had a quick look at the documentation and you can use any coordinate system you like, so you will be able to use one that is convenient for the process I have just described: something that uses units the size of a screen pixel at the viewing plane, with Y going upwards, X going right, and Z going away from the viewer.
One final caveat: if you want to add extrusions giving a rough approximation of the 3D building shapes (such data is available in OSM for some areas) note that my scheme requires the tops of buildings, and indeed anything above ground level, to have negative Z coordinates.
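A rough Python sketch of the projection and recline steps above (the local equirectangular projection and the tilt constant are illustrative assumptions, and it's Python rather than Swift just to keep the geometry in one place):

    import math

    def local_xy(lat_deg, lon_deg, origin_lat_deg, origin_lon_deg):
        # Simple local projection: meters east/north of an origin point.
        # Adequate over a street-sized area; use a real map projection for more.
        R = 6371000.0
        x = R * math.radians(lon_deg - origin_lon_deg) * math.cos(math.radians(origin_lat_deg))
        y = R * math.radians(lat_deg - origin_lat_deg)
        return x, y

    def recline(x, y, tilt_deg=60.0):
        # The map starts vertical (all z = 0); rotate it about the x-axis so it
        # leans away from the viewer, like tilting a laptop screen back.
        # tilt_deg is measured from vertical: 60 leaves the map 30 degrees
        # from horizontal, as in the aeroplane view described above.
        t = math.radians(tilt_deg)
        return (x, y * math.cos(t), y * math.sin(t))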
Pretty simple. First, convert your CLLocationCoordinate2D to an MKMapPoint, which is essentially the same as a CGPoint. Second, scale the MKMapPoint down by some arbitrary factor so it fits how you want it on your scene graph, say by 200. Since SceneKit's coordinate system is centered at (0, 0, 0), you'll need to make sure your location is offset accordingly. Then just create your SCNVector3s with the x/y of the MKMapPoint, and you will be locked to coordinates.
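Under the hood, MKMapPointForCoordinate is essentially a Web Mercator projection. A rough Python equivalent of the math (the world-size constant is an assumption for illustration, not a documented Apple value):

    import math

    WORLD = 268435456.0  # assumed width/height of the square world map, in map points

    def to_map_point(lat_deg, lon_deg):
        # Web Mercator: longitude maps linearly to x, latitude via a log term to y
        x = (lon_deg + 180.0) / 360.0 * WORLD
        s = math.sin(math.radians(lat_deg))
        y = (0.5 - math.log((1.0 + s) / (1.0 - s)) / (4.0 * math.pi)) * WORLD
        return x, y

    def to_scene(lat_deg, lon_deg, origin, scale=200.0):
        # Scale down by an arbitrary factor and recenter on the scene origin
        x, y = to_map_point(lat_deg, lon_deg)
        return ((x - origin[0]) / scale, (y - origin[1]) / scale)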
I'm currently developing a small piece of (Java) software that should be able to display maps and the current GPS position within that map.
I'm absolutely new to this, but it is pretty obvious that I'll have to do some kind of coordinate transformation.
I've found "Proj4J", which seems to be able to do a lot for me.
Now, what I have and what I want to do:
I have a bitmap of a map. The projection of this map can be any "well-defined" one, like Lambert or Mercator. I cannot fix this to one projection.
I have GPS coordinates from a "standard" GPS receiver. I believe they are lat/lon in WGS84, is that correct?
Now my questions:
I must map the GPS position to basically "screen coordinates" in my map bitmap. For that, I assume, reference points are needed for which I know their lat/lon and the corresponding pixel positions. Since my map can easily cover a couple of hundred kilometers, a linear interpolation between the known points and an arbitrary position is probably not correct for all types of projections; am I right about that?
I've read "Convert long/lat to pixel x/y on a given picure" so far, but this deals with a Mercator projection and I believe a linear approximation will work better than for a Lambert map.
I imagine the whole process is as follows:
"Calibrate" the map, i. e. identify two positions of known lat/lon in the bitmap and thus get their pixel position.
Use the Proj.4 transformation from "lat/lon WGS84" to "map projection" to map those reference points from (1.) into map coordinates.
Take the points from (2.) and map them again to a projection that will allow linear interpolation of the pixel positions; I'll call that the "pixel projection".
Now I have two reference points with coordinates in the "pixel projection" and their corresponding pixel positions.
For a lat/lon value from the GPS receiver do the following:
Convert the position to a map position using the "map projection".
Take the map position from (1.) and convert it to a coordinate using the "pixel projection" from above.
Since the "pixel projection" preserves a linear relationship to pixel positions (that is the defining condition of the pixel projection!), the coordinates from (2.) can be linearly interpolated against the known pixel positions of the reference points from above. In code, I imagine something like the sketch below.
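A sketch of the calibration and lookup using pyproj, the Python sibling of Proj4J (the choice of Web Mercator as the combined map/pixel projection and all sample values are assumptions):

    from pyproj import Transformer

    # lat/lon WGS84 -> "pixel projection" (Web Mercator assumed for this sketch)
    to_px_proj = Transformer.from_crs("EPSG:4326", "EPSG:3857", always_xy=True)

    # Calibration: two reference points with known lon/lat and pixel positions
    (ax, ay), (apx, apy) = to_px_proj.transform(11.50, 48.10), (120.0, 880.0)
    (bx, by), (bpx, bpy) = to_px_proj.transform(11.70, 48.30), (910.0, 95.0)

    # Two points determine per-axis scale and offset (but not rotation)
    sx, sy = (bpx - apx) / (bx - ax), (bpy - apy) / (by - ay)

    def latlon_to_pixel(lon, lat):
        x, y = to_px_proj.transform(lon, lat)
        return apx + (x - ax) * sx, apy + (y - ay) * sy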
Here the big questions:
Is this the way to go, using a final "pixel projection" to allow linear interpolation?
What type of projection would that be and can that be done with Proj.4?
Can the "way back" - I have a pixel position and want lat/lon be accomplished (like "pixel position" -> "pixel projection" -> "map projection" -> "lat/lon")?
Thank you very much,
Jens.
I know that POSIT calculates the translation and rotation between your camera and a 3D object.
The only problem I have right now is that I have no idea how the coordinate systems of the camera and the object are defined.
So, for example, if I get 90° around the z-axis, in which direction is the z-axis pointing, and is the object rotating around this axis or is the camera rotating around it?
Edit:
After some testing and playing around with different coordinate systems, I think this is right:
Definition of the camera coordinate system:
The z-axis points in the direction the camera is looking.
The x-axis points to the right when looking in the z-direction.
The y-axis points up when looking in the z-direction.
The object is defined in the same coordinate system, but each point is defined relative to the starting point and not to the coordinate system's origin.
The translation vector you get tells you how point[0] of the object is moved away from the origin of the camera coordinate system.
The rotation matrix tells you how to rotate the object in the camera's coordinate system in order to get the object's starting orientation. So the rotation matrix basically doesn't tell you how the object is rotated right now, but how you would have to reverse its current orientation.
Can anyone confirm this?
Check out this answer.
The y-axis is pointing downward. I don't know what you mean by "starting point". The camera lies at the origin of its coordinate system, and object points are defined in this system.
You are right about the rotation matrix, well, half right. The rotation matrix tells you how to rotate the coordinate system to make it oriented the same as the coordinate system used to define the model of the object. So it does tell you how the object is oriented with respect to the camera coordinate system.
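To make the convention concrete, here is how such a pose is typically applied (a NumPy sketch; R and t are placeholder values, not output of an actual POSIT run):

    import numpy as np

    # Camera frame as described above: x right, y down, z along the viewing direction
    R = np.eye(3)                  # rotation returned by POSIT (object frame -> camera frame)
    t = np.array([0.0, 0.0, 5.0])  # translation of the object's reference point

    def object_to_camera(p_obj):
        # A model point in object coordinates is rotated, then translated,
        # into the camera coordinate system.
        return R @ p_obj + t

    print(object_to_camera(np.array([0.1, 0.0, 0.0])))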
I have a series of lat/lon points, each representing the center of some object. I need to draw a line through such a point that extends x meters on either side of the center and is perpendicular to the heading (imagine a capital T).
Ultimately I want to get the lat/lon of this line's endpoints.
Thanks!
The basic calculation is in this similar question's answer: Calculate second point knowing the starting point and distance. Calculate the points for the two headings perpendicular to the main heading, at the distance you want.
Have a look at: Core Location extensions for bearing and distance
With those extensions and two points on the initial line you should be able to get the bearing, add/subtract pi/2 and find points to either side like this:
// Bearing of the T's stem, from its base toward its top
double bearing = [bottomOfT bearingInRadiansTowardsLocation:topOfT];
// Crossbar endpoints: 'meters' away from the top, perpendicular to the stem
CLLocation *left  = [topOfT newLocationAtDistance:meters
                              alongBearingradians:bearing + M_PI / 2];
CLLocation *right = [topOfT newLocationAtDistance:meters
                              alongBearingradians:bearing - M_PI / 2];
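If you would rather do the math directly, the underlying calculation is the standard great-circle destination-point formula. A Python sketch with made-up example values (the variable names are mine):

    import math

    R = 6371000.0  # mean earth radius in meters

    def destination(lat_deg, lon_deg, bearing_rad, distance_m):
        # Great-circle destination point from a start, an initial bearing,
        # and a distance along the earth's surface.
        d = distance_m / R
        lat1, lon1 = math.radians(lat_deg), math.radians(lon_deg)
        lat2 = math.asin(math.sin(lat1) * math.cos(d) +
                         math.cos(lat1) * math.sin(d) * math.cos(bearing_rad))
        lon2 = lon1 + math.atan2(math.sin(bearing_rad) * math.sin(d) * math.cos(lat1),
                                 math.cos(d) - math.sin(lat1) * math.sin(lat2))
        return math.degrees(lat2), math.degrees(lon2)

    center_lat, center_lon = 48.137, 11.575  # example center point
    heading = math.radians(45.0)             # example heading of the object
    meters = 50.0
    left  = destination(center_lat, center_lon, heading + math.pi / 2, meters)
    right = destination(center_lat, center_lon, heading - math.pi / 2, meters)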