Tilt centerline of projection - dart

In my Flutter app I am using proj4dart. I want to transform some coordinates from the standard WGS84 projection to a slightly changed one. The only thing I want to change is where north is, e.g. 30° further to the east.
+proj=longlat +datum=WGS84 +no_defs +type=crs
This is the default proj4 definition of WGS84. How can I change this projection as described above?
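As a hedged sketch of the underlying geometry: moving where north points amounts to rotating the pole of the sphere, which in PROJ terms is what the ob_tran pseudo-projection expresses (something like +proj=ob_tran +o_proj=longlat with +o_lat_p/+o_lon_p placing the moved pole, e.g. +o_lat_p=60 for a 30° tilt; whether proj4dart supports ob_tran would need checking). The Python/numpy sketch below shows the rotation itself; picking the 90°E meridian as the tilt direction is an assumption, since "30° further to the east" leaves the tilt axis open.

import numpy as np

def rotate_pole(lon_deg, lat_deg, tilt_deg=30.0):
    # lon/lat (degrees) -> unit vector on the sphere
    lon, lat, t = map(np.radians, (lon_deg, lat_deg, tilt_deg))
    v = np.array([np.cos(lat) * np.cos(lon),
                  np.cos(lat) * np.sin(lon),
                  np.sin(lat)])
    # Rotation about the x axis: the north pole tips over toward the
    # 90 deg E meridian by tilt_deg (assumed tilt direction).
    c, s = np.cos(t), np.sin(t)
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0,   c,   s],
                  [0.0,  -s,   c]])
    x, y, z = R @ v
    # unit vector -> lon/lat in the rotated frame
    return np.degrees(np.arctan2(y, x)), np.degrees(np.arcsin(z))

print(rotate_pole(0.0, 90.0))  # the old pole lands at lon 90, lat 60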

Related

How to calculate translation matrix?

I have 2D image data with the respective camera location in latitude and longitude. I want to translate pixel coordinates to 3D world coordinates. I have access to the intrinsic calibration parameters and to yaw, pitch and roll. Using yaw, pitch and roll I can derive the rotation matrix, but I do not see how to calculate the translation matrix. As I am working on a dataset, I don't have physical access to the camera. Please help me derive the translation matrix.
This cannot be done at all if you don't have the elevation of the camera with respect to the ground (AGL or ASL) or another way to resolve the scale from the image (e.g. by identifying an object of known size in the image, such as a soccer stadium in an aerial image).
Assuming you can resolve the scale, the next question is how precisely you can (or want to) model the terrain. For a first approximation you can use a standard geodetic ellipsoid (e.g. WGS-84). For higher precision, especially for images shot from lower altitudes, you will need to use a DTM and register it to the images. Either way, it is a standard back-projection problem: you compute the ray from the camera centre to the pixel, transform it into world coordinates, then intersect it with the ellipsoid or DTM.
There are plenty of open-source libraries in various languages to help you do that (e.g. GeographicLib).
Edited to add suggestions:
Express your camera location in ECEF.
Transform the ray from the camera into ECEF as well, taking the camera rotation into account. You can do both transformations using a library, e.g. nVector.
Then proceed to intersect the ray with the ellipsoid, as explained in this answer.
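A minimal sketch of that last step, assuming the camera position and ray direction have already been expressed in ECEF metres (plain numpy here rather than GeographicLib or nVector):

import numpy as np

A, B = 6378137.0, 6356752.314245  # WGS-84 semi-major/semi-minor axes, metres

def intersect_ellipsoid(cam_ecef, ray_dir):
    # Scaling by diag(1/A, 1/A, 1/B) maps the ellipsoid to the unit
    # sphere, so the intersection reduces to a quadratic in t.
    o = np.asarray(cam_ecef, dtype=float)
    d = np.asarray(ray_dir, dtype=float)
    d = d / np.linalg.norm(d)
    s = np.array([1.0 / A, 1.0 / A, 1.0 / B])
    os_, ds_ = o * s, d * s
    a = ds_ @ ds_
    b = 2.0 * (os_ @ ds_)
    c = os_ @ os_ - 1.0
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                       # the ray misses the ellipsoid
    t = (-b - np.sqrt(disc)) / (2.0 * a)  # nearer of the two roots
    return o + t * d if t >= 0.0 else None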

Aruco scales coordinates wrong

I am using the (newly released) ArUco 2.0.7 to track some markers.
The camera that I am using is mounted to the ceiling facing down, so I only need the x and y coordinates.
It views an area of 2.6 m by 1.5 m. If I understand the documentation correctly, when I supply the side length of the markers I'm using in an arbitrary unit, the output pose will be in the same unit.
So the markers have a side length of 19.5 cm. As I want my result in metres, I have set that value to 0.195.
However, the results I obtain are not correct. If I place the markers right in the corners of the camera's field of view, they are not at the corresponding expected x and y coordinates.
I am placing the global origin in one of the corners of the field of view, i.e. (0,0) is the bottom-left corner. This is done by transforming all incoming positions into that marker's coordinate system, using the matrix transforms obtained by getRTMatrix().
Everything seems to be working, except the x and y coordinates are in a wrong unit or scaled. The rotation works perfectly.
Am I missing something? Or can I simply not expect good accuracy? The error is significant, e.g. where it should be (2.6, 1.5) it is displayed as (1.8, 1.0), which is roughly a 33% error.
After some more thought I figured out that my camera had simply been calibrated with a smaller distance between the calibration board and the lens than my use case requires.
This caused the distortion coefficients to be wrong, thus giving me a bogus scale.
I re-calibrated using the aruco_calibration tool and am now accurate to roughly 3 or 4 cm, which is good enough for me.
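For reference, the origin shift described in the question, as a numpy sketch (assuming the 4x4 camera-from-marker matrices from getRTMatrix() have been copied into numpy arrays; the variable names are made up):

import numpy as np

def in_origin_frame(T_cam_origin, T_cam_marker):
    # Re-express a marker's pose in the frame of the marker that
    # defines the global (0,0) corner; both inputs are 4x4
    # camera-from-marker matrices.
    return np.linalg.inv(T_cam_origin) @ T_cam_marker

# With two measured poses:
#   T = in_origin_frame(T_cam_origin, T_cam_marker)
#   x, y = T[0, 3], T[1, 3]   # position relative to the corner marker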

Recreate the 3D outlines of a City street in iOS SceneKit with OSM XML data

What is the best strategy to recreate part of a street in iOS SceneKit using .osm XML data?
Please assume part of a street is offered in the OSM XML data and contains the necessary geopoints with latitude and longitude, denoting the Nodes that describe the paths/footprints of 6 buildings (i.e. ground-floor plans that line the side of a street).
Specifically, what's the best strategy for converting the latitude and longitude of these Nodes in order to place the building footprints/polygons on the ground plane of a SceneKit scene (i.e. running through position (0,0,0))? Thank you.
Very roughly and briefly, based on my own experience with 3D map rendering:
Transform the XML data from lat/long to appropriate coordinates for a 2D map (that is, project it to a plane using a map projection, then apply a 2D affine transform to get it into screen pixel coordinates). Create a 2D map that's wider and taller than the actual screen, because of what's going to happen in step 2:
Using a 3D coordinate system with your map vertical (i.e., set all the Z coordinates to zero), rotate the map so that it reclines at an appropriate shallow angle, as if you're in an aeroplane looking down on it; the angle might be 30 degrees from horizontal. To rotate the map you'll need to create a 3D rotation matrix. The axis of rotation will be the X axis: that is, the horizontal line that is the bottom border of your 2D map. The rotation is exactly the same as what happens when you rotate your laptop screen away from you.
Supply the new 3D coordinates to your rendering system. I haven't used SceneKit but I had a quick look at the documentation and you can use any coordinate system you like, so you will be able to use one that is convenient for the process I have just described: something that uses units the size of a screen pixel at the viewing plane, with Y going upwards, X going right, and Z going away from the viewer.
One final caveat: if you want to add extrusions giving a rough approximation of the 3D building shapes (such data is available in OSM for some areas) note that my scheme requires the tops of buildings, and indeed anything above ground level, to have negative Z coordinates.
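A compact numpy sketch of steps 1 and 2 above, with spherical Web Mercator standing in for "a map projection" and 30 degrees as the reclining angle (both choices are assumptions):

import numpy as np

R_EARTH = 6378137.0  # sphere radius used by Web Mercator, metres

def project(lon_deg, lat_deg):
    # Step 1: lat/long -> planar map coordinates (spherical Mercator).
    lon, lat = np.radians(lon_deg), np.radians(lat_deg)
    return R_EARTH * lon, R_EARTH * np.log(np.tan(np.pi / 4.0 + lat / 2.0))

def recline(x, y, angle_deg=30.0):
    # Step 2: put the 2D map at z = 0 and rotate it about the x axis
    # (its bottom border), like tipping a laptop screen away from you.
    a = np.radians(angle_deg)
    Rx = np.array([[1.0, 0.0,        0.0],
                   [0.0, np.cos(a), -np.sin(a)],
                   [0.0, np.sin(a),  np.cos(a)]])
    return Rx @ np.array([x, y, 0.0])

SceneKit could equally apply the rotation to a node; doing it by hand just mirrors the matrix description above.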
Pretty simple. First, convert your CLLocationCoordinate2D to an MKMapPoint, which is essentially the same as a CGPoint. Second, scale the MKMapPoint down by some arbitrary number so it fits how you want it in your scene graph, say by 200. Since SceneKit's coordinate system is centered at the origin, you'll need to make sure your location is correct. Then just create your SCNVector3s with the x/y of the MKMapPoint, and you will be locked to the coordinates.

Mapping lat/lon coordinates to a bitmap image of a map, not fixed to one projection

I'm currently developing a small piece of (Java) software that should be able to display maps and the current GPS position within that map.
I'm absolutely new to this, but it is pretty obvious that I'll have to do some kind of coordinate transformation.
I've found "Proj4J", which seems to be able to do a lot for me.
Now, what I have and what I want to do:
I have a bitmap of a map. The projection of this map can be any "well-defined" one, like Lambert or Mercator. I cannot fix this to one projection.
I have GPS coordinates from a "standard" GPS receiver. I believe they are lat/lon in WGS84, is that correct?
Now my questions:
I must map the GPS position to what are basically "screen coordinates" in my map bitmap. For that, I assume, reference points are needed whose lat/lon and corresponding pixel positions are known. Since my map can easily cover a couple of hundred kilometres, a linear interpolation between the known points and an arbitrary position is probably not correct for all types of projections, am I right about that?
I've read "Convert long/lat to pixel x/y on a given picture" so far, but this deals with a Mercator projection, and I believe a linear approximation works better there than it would for a Lambert map.
I imagine the whole process is as follows:
"Calibrate" the map, i. e. identify two positions of known lat/lon in the bitmap and thus get their pixel position.
Use the Proj.4-transformation from "lat/lon WGS84" to "map projection" to map those reference points from (1.) into map coordinates.
Take the points from (2.) and map them again to a projection that will allow linear interpolation of the pixel positions, I'll call that the "pixel projection".
Now I have two reference points with coordinates in the "pixel projection" and their corresponding pixel positions.
For a lat/lon value from the GPS receiver, do the following (see the sketch at the end of this question):
1. Convert the position to a map position using the "map projection".
2. Take the map position from (1.) and convert it to a coordinate using the "pixel projection" from above.
3. Since all distances in the "pixel projection" are maintained (that is the defining condition of the pixel projection!), the resulting coordinates from (2.) can be interpolated linearly against the known pixel positions of the reference points.
Here are the big questions:
1. Is this the way to go, using a final "pixel projection" to allow linear interpolation?
2. What type of projection would that be, and can it be done with Proj.4?
3. Can the "way back" (I have a pixel position and want lat/lon) be accomplished, i.e. "pixel position" -> "pixel projection" -> "map projection" -> "lat/lon"?
Thank you very much,
Jens.
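As a sketch of steps (2.) to (4.), here in Python with pyproj (the same PROJ machinery that Proj4J wraps), with Web Mercator standing in for the bitmap's actual projection and made-up calibration values:

from pyproj import Transformer

# WGS-84 lon/lat -> the map's projected CRS (Web Mercator as a stand-in).
to_map = Transformer.from_crs("EPSG:4326", "EPSG:3857", always_xy=True)

# Two calibration points: (lon, lat) and the pixel each one sits on.
refs = [((6.90, 50.90), (120.0, 830.0)),   # made-up values
        ((7.10, 51.10), (980.0,  40.0))]

(mx0, my0), (px0, py0) = to_map.transform(*refs[0][0]), refs[0][1]
(mx1, my1), (px1, py1) = to_map.transform(*refs[1][0]), refs[1][1]

# Per-axis linear fit pixel = a * map_coord + b; two points pin down
# scale and offset per axis (a rotated map would need a third point).
ax, ay = (px1 - px0) / (mx1 - mx0), (py1 - py0) / (my1 - my0)
bx, by = px0 - ax * mx0, py0 - ay * my0

def to_pixel(lon, lat):
    mx, my = to_map.transform(lon, lat)
    return ax * mx + bx, ay * my + by

The way back is the same chain inverted: pixel -> map coordinate via (pixel - b) / a, then to_map.transform(mx, my, direction="INVERSE") back to lon/lat.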

Marker Tracking + perspective warp of marker

I'm tracking a marker with ARToolKit+. I receive a model-view matrix that looks about right. Now I'd like to warp the image so that the marker looks just like it would if I looked straight at it. But whatever I do, the result looks extremely distorted. I know that ARToolKit stores the 4x4 matrix in column-major order, so I fixed that for OpenCV.
What I tried so far was:
1) fix the order to row major order
2) calculate the inverse with cvInverse (although transposing the 3x3 rotation part + inverting the translation should suffice)
3) use that matrix with cvWarpPerspective
Am I doing something wrong?
tl;dr:
I want this: https://www.youtube.com/watch?v=qZ-LU-C2p2Q
I get some distorted lines and lots of black instead.
Your problem is in converting from 4x4 to 3x3. The short answer is that you want to drop the 3rd column and the bottom row to make the 3x3, and then premultiply it with your camera matrix. For a longer explanation see here.
Clarification
The pose you get from ARTK represents a transform from one place to another. When I say "the initial image appears without rotation", I mean that your transform goes from an initial state with no rotation about the x or y axis to the current state. That is a fine assumption for most augmented-reality applications; I mentioned it just to be thorough.
As for why you can drop the 3rd column: since you are transforming a plane, your z coordinate can be completely expressed by your x and y coordinates given the equation of your plane. If we assume that initially there is no rotation, then your initial z coordinate is a constant value. If there is rotation, then z is not constant, but it varies deterministically in x and y according to the plane equation, which can still be expressed in one matrix (though you don't need that). Since in your case the 4x4 transform probably expresses the transform from the marker lying flat at z = 0 to its current position, the 3rd column of your 4x4 matrix does nothing (it all gets multiplied by 0), so it can be dropped without affecting the result.
In short: forget about the rotation stuff, it's more complicated than you need. Just realize that the transform is from initial coordinates to final coordinates, and your initial coordinates are always
[x,y,0,1]
which makes your third column irrelevant.
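In numpy the 4x4-to-3x3 recipe is a one-liner; K below stands for the 3x3 intrinsic camera matrix (the names are assumptions):

import numpy as np

def homography_from_pose(K, T):
    # Homography induced by a 4x4 camera-from-marker pose T for points
    # on the marker plane z = 0: keep columns 0, 1 and 3 of the top
    # three rows (dropping the z column and the bottom row), then
    # premultiply by the camera matrix K.
    H = K @ T[:3, [0, 1, 3]]
    return H / H[2, 2]  # normalise so the bottom-right entry is 1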
Update
I'm sorry! I just re-read your question and realized that you just want to warp the marker so it looks like a straight-on view; I got caught up in describing a general transform from 4x4 to 3x3. The 4x4 transform you get from ARTK is not the transform that will de-warp the marker, it is the transform that moves the marker from the origin to its final position. To de-warp the marker as you asked, the process is similar but slightly different. I haven't done it before, but here is my guess.
First, you need to get the 4x4 transform between where the marker is in world space and where you would like it to appear to be after warping it. Right now the transform goes from the origin to the marker location. To make the transform go from some point farther down the z axis (say 100) to the marker location, define the transform:
initial_marker_pose = [1, 0, 0,   0,
                       0, 1, 0,   0,
                       0, 0, 1, 100,
                       0, 0, 0,   1];
Now you have the transform from the origin to what you want as your "initial" position, and the transform from the origin to your "final" position. To get the transform from initial to final, simply:
initial_to_final = origin_to_marker*initial_marker_pose.inv();
Now you would follow the process outlined in the link I gave you; in this case your initial z position is no longer 0, it is 100. Then, when you are finished, you will need to invert your 3x3 matrix, because this process takes you from a straight-on view to the one defined by the pose from ARTK, and you want the opposite. You will need to experiment with the initial z position: the smaller it is, the larger your marker will appear after de-warping.
Hopefully that works, sorry for the confusion about your question.
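Putting the update together as a hedged sketch (Python/OpenCV rather than the old C API; K, origin_to_marker and size, i.e. the camera matrix, the ARTK pose and the output image size, are all assumed names):

import numpy as np
import cv2

def dewarp_marker(image, K, origin_to_marker, z0=100.0, size=(200, 200)):
    # Move the "initial" pose back along z by z0, as in the answer.
    initial_marker_pose = np.eye(4)
    initial_marker_pose[2, 3] = z0
    T = origin_to_marker @ np.linalg.inv(initial_marker_pose)
    # Homography for points on the plane z = z0: columns r1, r2 and
    # (z0 * r3 + t) of the top three rows, premultiplied by K.
    H = K @ np.column_stack((T[:3, 0], T[:3, 1], z0 * T[:3, 2] + T[:3, 3]))
    # Invert to go from the camera view to the straight-on view; in
    # practice you would compose inv(H) with a scale/offset so the
    # marker lands inside the output window.
    return cv2.warpPerspective(image, np.linalg.inv(H), size)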
