Calculate point position relative to origin when the image is rotated - OpenCV

I have a robot arm here with a camera attached. The camera is fixed to the arm and takes photos at the arm's current rotation/position.
I use OpenCV to detect certain points within the image, and I need to translate my detected coordinates back into the coordinate system of my robot arm (to move over them).
I'm struggling to figure out how to transform my points. The given information is: the origin of my arm's coordinate system, the arm's position, and the point's position inside the image.
Here is an image (hopefully) explaining what I want to achieve:
In addition, I need to subtract X units from my arm's length, since the pickup tool is at the tip of the arm and the camera sits before it.
This should be possible by converting the coordinates to angle + length, subtracting X from the length, and converting them back.
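A sketch of those two steps (all names here are placeholders, and it assumes the camera looks straight down, so the image is only rotated by the arm's angle and translated by the camera's position):

import Foundation
import simd

// Hypothetical helper: map a pixel detected by OpenCV into the arm's frame.
func pixelToArmFrame(pixel: SIMD2<Double>,
                     imageCenter: SIMD2<Double>,
                     metersPerPixel: Double,
                     armAngle: Double,              // arm rotation in radians
                     cameraPosition: SIMD2<Double>) -> SIMD2<Double> {
    // pixel -> metric offset from the image center (flip y: image y grows downward)
    let offset = SIMD2<Double>((pixel.x - imageCenter.x) * metersPerPixel,
                               -(pixel.y - imageCenter.y) * metersPerPixel)
    // rotate the offset by the arm's angle (2D rotation matrix) ...
    let rotated = SIMD2<Double>(cos(armAngle) * offset.x - sin(armAngle) * offset.y,
                                sin(armAngle) * offset.x + cos(armAngle) * offset.y)
    // ... then translate into the arm's coordinate system
    return cameraPosition + rotated
}

// Tool offset: convert to angle + length, shorten the length by X, convert back.
func applyToolOffset(_ p: SIMD2<Double>, toolOffset x: Double) -> SIMD2<Double> {
    let angle = atan2(p.y, p.x)
    let length = simd_length(p) - x
    return SIMD2<Double>(length * cos(angle), length * sin(angle))
}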

Related

Finding the depth in ARKit with SCNVector3Make

The goal of the project is to create a drawing app. I want it so that when I touch the screen and move my finger, it will follow the finger and leave a trail of cyan paint. I did create it, BUT there is one problem: the paint's DEPTH is always randomly placed.
Here is the code; you just need to connect the sceneView with the storyboard.
https://github.com/javaplanet17/test/blob/master/drawingar
My question is: how do I make the program so that the depth will always be consistent? By consistent I mean there is always the same distance between the paint and the camera.
If you run the code above you will see that I have printed out all the SCNMatrix4 values, but none of them is the DEPTH.
I have tried changing hitTransform.m43, but it only messes up the x and y.
If you want to get a point some consistent distance in front of the camera, you don’t want a hit test. A hit test finds the real world surface in front of the camera — unless your camera is pointed at a wall that’s perfectly parallel to the device screen, you’re always going to get a range of different distances.
If you want a point some distance in front of the camera, you need to get the camera’s position/orientation and apply a translation (your preferred distance) to that. Then to place SceneKit content there, use the resulting matrix to set the transform of a SceneKit node.
The easiest way to do this is to stick to SIMD vector/matrix types throughout rather than converting between those and SCN types. SceneKit adds a bunch of new accessors in iOS 11 so you can use SIMD types directly.
There are at least a couple of ways to go about this, depending on what result you want.
Option 1
// set up z translation for 20 cm in front of whatever
// last column of a 4x4 transform matrix is translation vector
var translation = matrix_identity_float4x4
translation.columns.3.z = -0.2
// get the camera transform the ARKit way
// (currentFrame is optional; force-unwrapped here for brevity)
let cameraTransform = view.session.currentFrame!.camera.transform
// if we wanted, we could go the SceneKit way instead; result is the same
// let cameraTransform = view.pointOfView!.simdTransform
// set the node transform by multiplying matrices
node.simdTransform = cameraTransform * translation
This option, using a whole transform matrix, not only puts the node a consistent distance in front of your camera, it also orients it to point in the same direction as your camera.
Option 2
// distance vector for 20 cm in front of whatever
let translation = float3(x: 0, y: 0, z: -0.2)
// treat the distance vector as being in camera space, convert to world space
// (pointOfView is optional; force-unwrapped here for brevity)
let worldTranslation = view.pointOfView!.simdConvertPosition(translation, to: nil)
// set node position (not whole transform)
node.simdPosition = worldTranslation
This option sets only the position of the node, leaving its orientation unchanged. For example, if you place a bunch of cubes this way while moving the camera, they'll all be lined up facing the same direction, whereas with Option 1 they'd all be facing in different directions.
Going beyond
Both of the options above are based only on the 3D transform of the camera — they don’t take the position of a 2D touch on the screen into account.
If you want to do that too, you've got your work cut out for you. Essentially, you're hit testing touches not against the world, but against a virtual plane that's always parallel to the camera and a certain distance away. That plane is a cross section of the camera projection frustum, so its size depends on what fixed distance from the camera you place it at. A point on the screen projects to a point on that virtual plane, with its position on the plane scaling in proportion to the plane's distance from the camera.
So, to map touches onto that virtual plane, there are a couple of approaches to consider. (Not giving code for these because it’s not code I can write without testing, and I’m in an Xcode-free environment right now.)
Make an invisible SCNPlane that's a child of the view's pointOfView node, parallel to the local xy-plane and some fixed z distance in front. Use a SceneKit hit test (not an ARKit hit test!) to map touches to that plane, and use the worldCoordinates of the hit test result to position the SceneKit nodes you drop into your scene.
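For illustration only, a rough, untested sketch of that first approach (it assumes the same context as the snippets above: view is an ARSCNView, node is the SCNNode being placed, touchLocation is a CGPoint from a touch handler):

let plane = SCNNode(geometry: SCNPlane(width: 10, height: 10))
// render nothing, but keep the geometry hit-testable
plane.geometry?.firstMaterial?.colorBufferWriteMask = []
plane.position = SCNVector3(0, 0, -0.2)   // 20 cm along the camera's local -z
view.pointOfView?.addChildNode(plane)

// later, in the touch handler: SceneKit hit test (not ARKit!)
let hits = view.hitTest(touchLocation, options: [SCNHitTestOption.rootNode: view.pointOfView!])
if let hit = hits.first {
    node.position = hit.worldCoordinates
}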
Use Option 1 or Option 2 above to find a point some fixed distance in front of the camera (or a whole translation matrix oriented to match the camera, translated some distance in front). Use SceneKit's projectPoint method to find the normalized depth value Z for that point, then call unprojectPoint with your 2D touch location and that same Z value to get the 3D position of the touch location at your chosen camera distance. (For extra code/pointers, see my similar technique in this answer.)
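Again a rough, untested sketch under the same assumptions:

// 1. a world-space point 20 cm in front of the camera (Option 2 above)
let ahead = view.pointOfView!.simdConvertPosition(float3(0, 0, -0.2), to: nil)
// 2. project it to get the normalized depth for that camera distance
let depth = view.projectPoint(SCNVector3(ahead)).z
// 3. unproject the 2D touch location at that same depth
node.position = view.unprojectPoint(
    SCNVector3(Float(touchLocation.x), Float(touchLocation.y), depth))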

Recreate the 3D outlines of a City street in iOS SceneKit with OSM XML data

What is the best strategy to recreate part of a street in iOS SceneKit using .osm XML data?
Please assume part of a street is offered in the OSM XML data and contains the necessary geopoints with latitude and longitude denoting the Nodes to describe the paths/footprints of 6 buildings (i.e. ground floor plans that line the side of a street).
Specifically, what's the best strategy to convert latitude and longitude Nodes in order to locate these building footprints/polygons on the ground floor in a scene within SceneKit iOS? (i.e. running through position 0,0,0)? Thank you.
Very roughly and briefly, based on my own experience with 3D map rendering:
1. Transform the XML data from lat/long to appropriate coordinates for a 2D map: project it to a plane using a map projection, then apply a 2D affine transform to get it into screen pixel coordinates. Create a 2D map that's wider and taller than the actual screen, because of what's going to happen in step 2.
2. Using a 3D coordinate system with your map vertical (i.e., set all the Z coordinates to zero), rotate the map so that it reclines at an appropriate shallow angle, as if you're in an aeroplane looking down on it; the angle might be 30 degrees from horizontal. To rotate the map you'll need to create a 3D rotation matrix (a sketch follows after this list). The axis of rotation will be the X axis: that is, the horizontal line that is the bottom border of your 2D map. The rotation is exactly the same as what happens when you rotate your laptop screen away from you.
3. Supply the new 3D coordinates to your rendering system. I haven't used SceneKit, but I had a quick look at the documentation and you can use any coordinate system you like, so you will be able to use one that is convenient for the process I have just described: something that uses units the size of a screen pixel at the viewing plane, with Y going upwards, X going right, and Z going away from the viewer.
One final caveat: if you want to add extrusions giving a rough approximation of the 3D building shapes (such data is available in OSM for some areas) note that my scheme requires the tops of buildings, and indeed anything above ground level, to have negative Z coordinates.
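A minimal sketch of the step-2 rotation (the names and the 60° value are illustrative; 60° assumes the map should end up 30° from horizontal):

import Foundation
import simd

// Recline map vertices (z = 0) about the X axis, i.e. the bottom border of the map.
func recline(_ point: SIMD3<Float>, by angle: Float) -> SIMD3<Float> {
    // 3D rotation matrix about the X axis
    let rotX = simd_float3x3(rows: [
        SIMD3<Float>(1, 0, 0),
        SIMD3<Float>(0, cos(angle), -sin(angle)),
        SIMD3<Float>(0, sin(angle), cos(angle))
    ])
    return rotX * point
}

let reclined = recline(SIMD3<Float>(120, 300, 0), by: -60 * .pi / 180)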
Pretty simple. First, convert your CLLocationCoordinate2D to an MKMapPoint, which is just a flat 2D point (much like a CGPoint). Second, scale down the MKMapPoint by some arbitrary number so it fits in with how you want it on your scene graph, let's say by 200. Since SceneKit's coordinate system is centered at (0,0,0), you'll need to make sure your location is offset correctly. Then just create your SCNVector3s with the x/y of the MKMapPoint, and you will be locked to coordinates.
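A rough sketch of that conversion (sceneOriginCoordinate, the sample coordinates, and the 1/200 scale are arbitrary placeholders):

import MapKit
import SceneKit

let sceneOriginCoordinate = CLLocationCoordinate2D(latitude: 52.520, longitude: 13.405)
let buildingCorner = CLLocationCoordinate2D(latitude: 52.521, longitude: 13.406)

// project both coordinates to flat map points, then scale and recenter
let center = MKMapPoint(sceneOriginCoordinate)
let point = MKMapPoint(buildingCorner)
let scale = 1.0 / 200.0
// x/y of the map point become x/z on SceneKit's ground plane (y is up)
let position = SCNVector3(Float((point.x - center.x) * scale),
                          0,
                          Float((point.y - center.y) * scale))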

Camera pose and reflections using OpenCV's SolvePnP

I'm trying to use the function solvePnP to estimate the relative position of a camera. My question is this: when choosing world coordinates, do I need to be careful to choose them so that there can be no reflections when transforming them to camera coordinates? Or will OpenCV correct that for me?
Details: I'm filming a tennis court and was originally setting the world coordinate origin to be the centre of the court, with the x-axis pointing parallel to the net towards the left, the y-axis pointing forwards vertically on the court, and the z-axis pointing upwards. If I've understood correctly, solvePnP will transform these coordinates to a system with origin at some point behind the top left corner of an image, with the x-axis pointing downwards on the image, the y-axis pointing to the right, and the z-axis pointing forwards into the scene. However, this transformation would definitely involve a reflection. Must I swap the x and y axes of my world coordinates to avoid this, or is it fine to leave them as they are? (Also, let me know if I'm making a big mistake and solvePnP actually puts the origin at a point behind the centre of the image rather than behind the top left corner...)
Assuming that you have a camera calibration matrix (and that the calibration was done assuming a right-handed coordinate system all along), and correct correspondences between the tennis court features in the image and the CAD features:
You need to select the reference frame in the tennis court such that it is a right-handed coordinate system. Then your solution from solvePnP provides the pose and position of the tennis court reference frame with respect to the camera coordinate system (by default a right-handed coordinate system).
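As a quick sanity check (a small sketch, not OpenCV-specific): in a right-handed frame, the cross product of the x and y axes must equal the z axis.

import simd

let xAxis = SIMD3<Double>(1, 0, 0)   // parallel to the net, to the left
let yAxis = SIMD3<Double>(0, 1, 0)   // along the court, forward
let up = simd_cross(xAxis, yAxis)    // (0, 0, 1): right-handed, z points up
// If your intended z axis were (0, 0, -1) instead, the frame would be
// left-handed; swap two axes (e.g. x and y) before calling solvePnP.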
Hope it helps

Calculating 3D coordinates of an Object with a Single Phone Camera

I have a phone camera that's viewing a planar object. I know the real-world measurements of the object. Considering the top left corner of the object as the origin, I calculate the coordinates using the real-world measurements. With the object detection algorithm I am able to get the pixel coordinates of the detected object in the image (again, the image's origin is in the top left corner). I obtain the rotation and translation matrices using solvePnP(). Now, is it possible (with the obtained parameters) to find the distance and the height of the object with respect to the first frame?
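For reference, the camera's position in the object's frame follows directly from the solvePnP pose. A sketch with illustrative names (solvePnP returns R and t such that X_cam = R·X_world + t):

import simd

// Camera center in world (object) coordinates: C = -Rᵀ·t.
// The distance from the camera to the world origin is simply length(t);
// the "height" is the component of C along your world up-axis.
func cameraCenter(R: simd_double3x3, t: SIMD3<Double>) -> SIMD3<Double> {
    return -(R.transpose * t)
}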

What is this rotation behavior in XNA?

I am just starting out in XNA and have a question about rotation. When you multiply a vector by a rotation matrix in XNA, it goes counter-clockwise. This I understand.
However, let me give you an example of what I don't get. Let's say I load a random art asset into the pipeline. I then create some variable to increment every frame by 2 degrees when the update method runs (testRot += 0.034906585f, i.e. 2° in radians). The source of my confusion is that the asset rotates clockwise in this screen space, even though a rotation matrix rotates a vector counter-clockwise.
One other thing: when I specify where my position vector is, as well as my origin, I understand that I am rotating about the origin. Am I to assume that there are perpendicular axes passing through this asset's origin as well? If so, where does rotation start from? In other words, does rotation start from the Y-axis or the X-axis?
The XNA SpriteBatch works in Client Space, where "up" is Y-, not Y+ (as it is in Cartesian space, projection space, and what most people usually select for their world space). This makes the rotation appear clockwise (not counter-clockwise as it would in Cartesian space). The actual coordinates the rotation produces are the same.
Rotations are relative, so they don't really "start" from any specified position.
If you are using maths functions like sin or cos or atan2, then absolute angles always start from the X+ axis as zero radians, and the positive rotation direction rotates towards Y+.
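A small sketch of why the same numbers look clockwise in Y-down client space (illustrative, not XNA code):

import Foundation

// Rotate the point (1, 0) by +90 degrees with the standard rotation matrix:
let a = Float.pi / 2
let x = cos(a) * 1 - sin(a) * 0   // ≈ 0
let y = sin(a) * 1 + cos(a) * 0   // ≈ 1
// In Cartesian space (Y up) that result is counter-clockwise. Drawn in
// client space (Y down), y = 1 lies below the X axis, so it looks clockwise.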
The order of operations of SpriteBatch looks something like this:
1. The sprite starts as a quad with its top-left corner at (0,0), its size being the same as its texture size (or SourceRectangle).
2. Translate the sprite back by its origin (thus placing its origin at (0,0)).
3. Scale the sprite.
4. Rotate the sprite.
5. Translate the sprite by its position.
6. Apply the matrix from SpriteBatch.Begin.
This places the sprite in Client Space.
Finally a matrix is applied to each batch to transform that Client Space into the Projection Space used by the GPU. (Projection space is from (-1,-1) at the bottom left of the viewport, to (1,1) in the top right.)
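Conceptually, steps 2-5 of that list compose like this (a sketch using 3x3 2D affine matrices; simd stands in for XNA's Matrix here, and the numbers are placeholders):

import Foundation
import simd

func translate(_ v: SIMD2<Float>) -> simd_float3x3 {
    var m = matrix_identity_float3x3
    m.columns.2 = SIMD3<Float>(v.x, v.y, 1)   // last column holds the translation
    return m
}
func rotate(_ a: Float) -> simd_float3x3 {
    simd_float3x3(rows: [SIMD3<Float>(cos(a), -sin(a), 0),
                         SIMD3<Float>(sin(a),  cos(a), 0),
                         SIMD3<Float>(0, 0, 1)])
}
func scale(_ s: Float) -> simd_float3x3 {
    simd_float3x3(diagonal: SIMD3<Float>(s, s, 1))
}

let origin = SIMD2<Float>(16, 16)        // sprite origin (e.g. its center)
let position = SIMD2<Float>(100, 80)     // where to draw it
// applied right to left: back by origin, scale, rotate, then to position
let spriteTransform = translate(position) * rotate(0.0349) * scale(2) * translate(-origin)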
Since you are new to XNA, allow me to introduce a library that will greatly help you out while you learn. It is called XNA Debug Terminal, an open source project that allows you to run arbitrary code at runtime, so you can check whether your variables have the values you expect. All this happens in a terminal display on top of your game and without pausing your game. It can be downloaded at http://www.protohacks.net/xna_debug_terminal
It is free and very easy to setup so you really have nothing to lose.
