Is it possible to fix a node's orientation such that it no longer requires a lookAt constraint? I.e., "bake" the orientation of the node.
I'm making "bonds" (cylinders) between "atoms" (spheres). I place the cylinder node in a container node so as to re-orient its axes so the lookAt points the geometry's y-axis from one atom to another. So this all works but for two issues:
a) It seems to require delicate ordering in which these bonds are made else other bonds will pivot to undesirable positions. b) More importantly, it makes animation difficult.
So I'd like to use the lookAt to orient the bond then, somehow, get that orientation into a vector such that the bond becomes independant of the looked at node.
Edit: Here's a link to my ultimate answer: swift: orient y-axis toward another point in 3-d space
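One possible way to "bake" the constraint-driven orientation (a rough, untested sketch; bondNode is an assumed name) is to let the lookAt constraint run for at least one rendered frame, then copy the presentation node's transform back into the model node and drop the constraint:

// Sketch only: the presentation node carries the constraint-adjusted transform
bondNode.transform = bondNode.presentation.transform
// With the orientation captured, the bond no longer tracks the looked-at atom
bondNode.constraints = nil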
Let's say I add a 3D model such as a dog as a child node to my scene's root node in viewDidLoad. I printed out the dog node's transform and worldTransform properties, both of which are just 4x4 identity matrices.
After rotating, scaling, and positioning, I re-printed the transform and worldTransform properties. I could not understand how to read them. Which column refers to position, size, or orientation?
Under any transform, how do I figure out 1) which direction the front of the dog is facing, assuming that in viewDidLoad the front was facing (0,0,-1) direction, and 2) the height and width of the dog?
A full introduction to transform matrices is a) beyond the scope of a simple SO answer and b) such a basic topic in 3D graphics programming that you can find a zillion or two books, tutorials, and resources on the topic. Here are two decent writeups:
Linear Algebra for Graphics Programming at metalbyexample.com
Transformations at learnopengl.com
Since you're working with ARKit and SceneKit, though, there are a number of convenience utilities for working with transforms, so you often don't need to dig into the math.
which direction the front of the dog is facing?
A node is always "facing" toward (0,0,-1) in its local coordinate space. (Note this is SceneKit's intrinsic notion of "facing", which may or may not map to how any custom assets are designed. If you've imported an OBJ, DAE, or whatever file built in a 3D authoring tool, and in that tool the dog's nose is pointed to (0,0,-1), your dog is facing the "right" way.)
You can get this direction from SCNNode.simdLocalFront — notice it's a static/class property, because in local space the front is always the same direction for all nodes.
What you're probably more interested in is how the node's own idea of its "front" converts to world space — that is, which way is the dog facing relative to the rest of the scene. Here are two ways to get that:
Convert the simdLocalFront to world space, the way you can any other vector: node.simdConvertVector(SCNNode.simdLocalFront, to: nil). (Notice that if you leave the to parameter nil, you convert to world space.)
Use the simdWorldFront property, which does that conversion for you.
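For example, a minimal sketch (dogNode is an assumed name for your node):

// The local front is the same for every node: (0, 0, -1)
let localFront = SCNNode.simdLocalFront
// Way 1: convert that direction from the node's local space to world space
let worldFacingA = dogNode.simdConvertVector(localFront, to: nil)
// Way 2: let SceneKit do the conversion for you
let worldFacingB = dogNode.simdWorldFront
// Both give the same unit vector: the direction the dog faces in the scene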
the height and width of the dog?
Height and width have nothing to do with transform. A transform tells you (primarily) where something is and which way it's facing.
Assuming you haven't scaled your node, though, its bounding box in local space describes its dimensions in world space:
let (min, max) = node.boundingBox
let height = abs(max.y - min.y)
let width = abs(max.x - min.x)
let depthiness = abs(max.z - min.z)
(Note this is intrinsic height, though: if you have, say, a person model that's standing up, and then you rotate it so they're lying down, this height is still head-to-toe.)
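If you have scaled the node, a rough way to account for that (a sketch that assumes a plain scale with no rotation) is to multiply the local extents by the node's scale:

// Sketch only: approximate scaled size of an unrotated node
let (minBound, maxBound) = node.boundingBox
let scaledHeight = (maxBound.y - minBound.y) * node.scale.y
let scaledWidth  = (maxBound.x - minBound.x) * node.scale.x
let scaledDepth  = (maxBound.z - minBound.z) * node.scale.z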
At the moment, my code loads in a 3D model, creates a node using that model, and displays the node in the scene. Setting the scale/rotation (euler angles) of the node works fine. However, I'm trying to set the position of the node relative to the world origin, and I don't want the node to be attached to a plane.
I've tried setting node.position and node.worldPosition to no avail; although the position of the node changes, when the camera moves, the node doesn't stay static, but moves about with the camera. I'm new to using ARKit, so I'm probably doing something stupid, but I can't figure out what it is that I need to do, so any help would be much appreciated.
Edit:
The weird thing is that if I set the coordinates to, say, SCNVector3(0, 3, 0), it's fine, but if I go over a certain number of meters away it seems to fail. Is this expected?
Firstly, poor world tracking can be caused by these common issues:
Poor lighting: causes a low number of feature points available for tracking.
Lack of texture: causes a low number of feature points available for tracking.
Fast movement: causes blurry images, which causes tracking to fail.
However, what I believe is happening in your case (which is a little trickier to debug) is that you are most likely placing the loaded model underneath the detected horizontal plane in the scene.
In other words, you may have positioned the SCNNode using a negative Y coordinate, which places the node below the detected horizontal plane and causes the model to drift around as you move the camera.
Try setting the Y position of the node to either 0 or a small positive value like 0.1 metres:
node.position = SCNVector3(0, 0, -1) // SceneKit/AR coordinates are in meters
sceneView.scene.rootNode.addChildNode(node)
z = -1 places the SCNNode one metre in front of the world origin, which coincides with the camera's position when the session starts.
Note: I verified this issue myself using a playground I use for testing purposes.
The goal of the project is to create a drawing app. I want it so that when I touch the screen and move my finger, it will follow the finger and leave cyan-colored paint. I did create it, BUT there is one problem: the paint DEPTH is always randomly placed.
Here is the code; you just need to connect the sceneView with the storyboard.
https://github.com/javaplanet17/test/blob/master/drawingar
My question is: how do I make the program so that the depth will always be consistent? By consistent I mean there is always the same distance between the paint and the camera.
If you run the code above you will see that I have printed out all the SCNMatrix4 values, but none of them is the DEPTH.
I have tried to change hitTransform.m43, but it only messes up the x and y.
If you want to get a point some consistent distance in front of the camera, you don’t want a hit test. A hit test finds the real world surface in front of the camera — unless your camera is pointed at a wall that’s perfectly parallel to the device screen, you’re always going to get a range of different distances.
If you want a point some distance in front of the camera, you need to get the camera’s position/orientation and apply a translation (your preferred distance) to that. Then to place SceneKit content there, use the resulting matrix to set the transform of a SceneKit node.
The easiest way to do this is to stick to SIMD vector/matrix types throughout rather than converting between those and SCN types. SceneKit adds a bunch of new accessors in iOS 11 so you can use SIMD types directly.
There’s at least a couple of ways to go about this, depending on what result you want.
Option 1
// set up z translation for 20 cm in front of whatever
// last column of a 4x4 transform matrix is translation vector
var translation = matrix_identity_float4x4
translation.columns.3.z = -0.2
// get camera transform the ARKit way
let cameraTransform = view.session.currentFrame!.camera.transform  // currentFrame is optional; force-unwrapped for brevity
// if we wanted, we could go the SceneKit way instead; result is the same
// let cameraTransform = view.pointOfView!.simdTransform
// set node transform by multiplying matrices
node.simdTransform = cameraTransform * translation
This option, using a whole transform matrix, not only puts the node a consistent distance in front of your camera, it also orients it to point the same direction as your camera.
Option 2
// distance vector for 20 cm in front of whatever
let translation = float3(x: 0, y: 0, z: -0.2)
// treat distance vector as in camera space, convert to world space
let worldTranslation = view.pointOfView!.simdConvertPosition(translation, to: nil)
// set node position (not whole transform)
node.simdPosition = worldTranslation
This option sets only the position of the node, leaving its orientation unchanged. For example, if you place a bunch of cubes this way while moving the camera, they’ll all be lined up facing the same direction, whereas with option 1 they’d all be in different directions.
Going beyond
Both of the options above are based only on the 3D transform of the camera — they don’t take the position of a 2D touch on the screen into account.
If you want to do that, too, you’ve got more work cut out for you — essentially what you’re doing is hit testing touches not against the world, but against a virtual plane that’s always parallel to the camera and a certain distance away. That plane is a cross section of the camera projection frustum, so its size depends on what fixed distance from the camera you place it at. A point on the screen projects to a point on that virtual plane, with its position on the plane scaling proportional to the distance from the camera.
So, to map touches onto that virtual plane, there are a couple of approaches to consider. (Not giving code for these because it’s not code I can write without testing, and I’m in an Xcode-free environment right now.)
Make an invisible SCNPlane that’s a child of the view’s pointOfView node, parallel to the local xy-plane and some fixed z distance in front. Use SceneKit hitTest (not ARKit hit test!) to map touches to that plane, and use the worldCoordinates of the hit test result to position the SceneKit nodes you drop into your scene.
Use Option 1 or Option 2 above to find a point some fixed distance in front of the camera (or a whole translation matrix oriented to match the camera, translated some distance in front). Use SceneKit’s projectPoint method to find the normalized depth value Z for that point, then call unprojectPoint with your 2D touch location and that same Z value to get the 3D position of the touch location with your camera distance. (For extra code/pointers, see my similar technique in this answer.)
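For example, here is a rough, untested sketch of that second approach (sceneView, touchLocation, and paintPosition are assumed names, not anything from the question's code):

import UIKit
import ARKit
import SceneKit

// Given a 2D touch point, find a 3D world position a fixed distance in front of the camera
func paintPosition(for touchLocation: CGPoint, in sceneView: ARSCNView) -> SCNVector3? {
    let distance: Float = 0.2  // assumed fixed drawing distance: 20 cm
    guard let cameraNode = sceneView.pointOfView else { return nil }
    // A point 20 cm in front of the camera, in world space (same idea as Option 2)
    let inFront = cameraNode.simdConvertPosition(simd_float3(0, 0, -distance), to: nil)
    // Project it to find the normalized depth (z) at that camera distance...
    let projected = sceneView.projectPoint(SCNVector3(inFront.x, inFront.y, inFront.z))
    // ...then unproject the 2D touch at that same depth to get its 3D world position
    return sceneView.unprojectPoint(SCNVector3(Float(touchLocation.x),
                                               Float(touchLocation.y),
                                               projected.z))
}

Setting your paint node's position to the returned value each time the touch moves should keep the paint at a constant distance from the camera.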
What is the math that SCNLookAtConstraint is doing? I want to try to recreate this with vectors.
I think it can be done with a cross product and a dot product once you have the two directional vectors.
By default the node points in the direction of the negative z-axis of its local coordinate system.
The other direction we are interested in is the direction from the looking node toward the other node, expressed in the looking node's local coordinate system. You can get it by converting positions using convertPosition:fromNode: or convertPosition:toNode:.
If not done already, normalize the two directional vectors.
With the two directions in the local coordinate system, a cross product between the two gives a vector that is orthogonal to the plane that can be formed between the two directions. This vector is the surface normal to that plane. Any rotation around that normal is going to be another vector that remains in the plane.
Since the two directions are normalized, a dot product of the two should give you cos(ϴ), where ϴ is the angle between the two.
Rotating the first vector (the one that points in the direction of the negative z-axis) by this angle around the normal to the plane should make it point in the same direction as the second vector (that one that points at the other node).
That should be the way it's done for two vectors (or at least one way to do it).
To do it for a node, you would set a rotation of that angle around that axis on the node that is looking. This rotates the node so that its local negative z-axis (the direction it's looking) points at the other node.
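Here is a rough, untested sketch of that procedure in Swift (looker and target are assumed node names; the degenerate case where the two directions are parallel is not handled):

import Foundation
import SceneKit
import simd

func point(_ looker: SCNNode, at target: SCNNode) {
    // The direction the node "looks" by default: -z in its own local space
    let localFront = simd_float3(0, 0, -1)
    // Direction toward the target, expressed in the looker's local space, normalized
    let toTarget = simd_normalize(looker.simdConvertPosition(target.simdWorldPosition, from: nil))
    // Cross product: the normal of the plane spanned by the two directions
    let axis = simd_normalize(simd_cross(localFront, toTarget))
    // Dot product of unit vectors gives cos(θ); clamp to guard against rounding error
    let angle = acos(max(-1, min(1, simd_dot(localFront, toTarget))))
    // Rotate the looker by that angle around that axis (both expressed in its local space)
    looker.simdOrientation = looker.simdOrientation * simd_quatf(angle: angle, axis: axis)
}

Composing with the node's current orientation (rather than assigning the rotation outright) keeps this working even if the node already has some rotation applied.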
I have a very similar example in one of the chapters for 3D Graphics with Scene Kit, where a node is rotated to point straight out of the surface of a sphere. You can look at the sample code to see how it's solved there.
I'm a newbie in XNA, so sorry for the simple question, but I can't find any solution.
I've got a simple model (similar to a flat cuboid), which I cannot change (the model itself). I would like to create a rotation animation. In this particular problem, my model is just the cover of a piano. However, the axis around which I'm rotating runs through the cover's median, so my model rotates like a turbine instead of opening and closing.
I would like to rotate my object around a given "line". I found the Matrix.CreateLookAt(currentPosition, dstPosition, Vector3.Up); method, but I still don't know how to combine rotation with such a matrix.
Matrix.CreateLookAt is meant for use in a camera, not for manipulating models (although I'm sure some clever individuals who understand what sort of matrix it creates have done so).
What you are wanting to do is rotate your model around an arbitrary axis in space. It's not an animation (those are created in 3D modeling software, not the game), it's a transformation. Transformations are methods by which you can move, rotate and scale a model, and are obviously the crux of 3D game graphics.
For your problem, you want to rotate this flat piece around its edge, yes? To do this, you will combine translation and axis rotation.
First, you want to move the model so the edge you want to rotate around intersects with the origin. So, if the edge was a straight line in the Z direction, it would be perfectly aligned with the Z axis and intersecting 0,0,0. To do this you will need to know the dimensions of your model. Once you have those, create a Matrix:
Matrix originTranslation = Matrix.CreateTranslation(new Vector3(-modelWidth / 2f, 0, 0));
(This assumes a square model. Manipulate the Vector3 until the edge you want is intersecting the origin)
Now, we want to do the rotating. This depends on the angle of your edge. If your model is a square and thus the edge is straight forward in the Z direction, we can just rotate around Vector3.Forward. However, if your edge is angled (as I imagine a piano cover to be), you will have to determine the angle yourself and create a Unit Vector with that same angle. Now you will create another Matrix:
Matrix axisRotation = Matrix.CreateFromAxisAngle(myAxis, rotation);
where myAxis is the unit vector which represents the angle of the edge, and rotation is a float for the number of radians to rotate.
That last bit is the key to your 'animation'. What you are going to want to do is vary that float amount depending on how much time has passed to create an 'animation' of the piano cover opening over time. Of course you will want to clamp it at an upper value, or your cover will just keep rotating.
Now, in order to actually transform your cover model, you must multiply its world matrix by the two above matrices, in order.
pianoCover.World *= originTranslation * axisRotation;
Then, if you wish, you can translate the cover back so that its center is at the origin (by multiplying by a translation matrix whose Vector3 values are the negation of the ones you used first), and then subsequently translate your cover to wherever it needs to be in space using another translation matrix.
So, note how matrices are used in 3D games. A matrix is created using the appropriate Matrix method in order to create the quality you desire (translation, rotation around an axis, scale, etc.). You make a matrix for each of these properties. Then you multiply them in a specific order (order matters in matrix multiplication) to transform your model as you wish. Often, as seen here, these transformations are intermediate steps needed to get the desired effect (we could not simply move the cover to where we wanted it and then rotate it around its edge; we had to move the edge to the origin, rotate, move it back, etc.).
Working with matrices in 3D is pretty tough. In fact, I may not have gotten all that right (I hope by now I know that well enough, but...). The more practice you get, the better you can judge how to perform tasks like this. I would recommend reading tutorials on the subject. Any tutorial that covers 3D in XNA will have this topic.
In closing, though, if you know 3D Modeling software well enough, I would probably suggest you just make an actual animation of a piano and cover opening and closing and use that animated model in your game, instead of using models for both the piano and cover and trying to keep them together.