I am adding nodes to an ARSCNView as child nodes, cloning nodes depending on what I choose from the menu. The object is placed where I tap on the screen. How do I translate and scale a specific node?
Use the transform property of SCNNode for that:
The transformation is the combination of the node's rotation, position, and scale properties. The default transformation is SCNMatrix4Identity.
When you set the value of this property, the
node’s rotation, orientation, eulerAngles, position, and scale
properties automatically change to match the new transform, and vice
versa. SceneKit can perform this conversion only if the transform you
provide is a combination of rotation, translation, and scale
operations. If you set the value of this property to a skew
transformation or to a nonaffine transformation, the values of these
properties become undefined. Setting a new value for any of these
properties causes SceneKit to compute a new transformation, discarding
any skew or nonaffine operations in the original transformation. You
can animate changes to this property’s value. See Animating SceneKit
Content.
Or you can use:
position,
rotation,
eulerAngles,
orientation,
scale
You can look at how Apple's sample code does the same thing here, with its VirtualObjectManager and gesture recognizers for scale/translate.
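For example, here is a minimal, untested sketch of translating and scaling one specific node by setting its position and scale directly; the adjust(_:) helper name and the numbers are placeholders, not part of Apple's sample:
import SceneKit

// Hypothetical helper: translate and scale one specific node.
// `node` is assumed to be the clone you added where the user tapped.
func adjust(_ node: SCNNode) {
    // Translate: shift the node 10 cm along +x and 5 cm up, in its parent's space.
    node.position = SCNVector3(node.position.x + 0.1,
                               node.position.y + 0.05,
                               node.position.z)

    // Scale: halve the node's current size, uniformly and animated.
    SCNTransaction.begin()
    SCNTransaction.animationDuration = 0.3
    node.scale = SCNVector3(node.scale.x * 0.5,
                            node.scale.y * 0.5,
                            node.scale.z * 0.5)
    SCNTransaction.commit()
}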
Related
When I add a new node with ARKit (ARSKView), the object is positioned based on the device camera. So if your phone is facing down or tilted, the object will be in that direction as well. How can I instead place the object based on the horizon?
For that, right after a new node's creation, use the worldOrientation instance property, which controls the node's orientation relative to the scene's world coordinate space.
var worldOrientation: SCNQuaternion { get set }
This quaternion isolates the rotational aspect of the node's worldTransform matrix, which in turn is the conversion of the node's transform from local space to the scene's world coordinate space. That is, it expresses the difference in axis and angle of rotation between the node and the scene's rootNode.
let worldOrientation = sceneView.scene.rootNode.worldOrientation
yourNode.orientation = worldOrientation    /* quaternion: X, Y, Z, W components (assign to orientation, not rotation) */
P.S. (since you updated your question):
If you're using SpriteKit, the 2D sprites you spawn in an ARSKView always face the camera. So, if the camera moves around a fixed point of the real scene, all the sprites must be rotated about their pivot point, still facing the camera.
Nothing can prevent you from using SceneKit and SpriteKit together.
The goal of the project is to create a drawing app. I want it so that when I touch the screen and move my finger, it will follow the finger and leave cyan paint. I did create it, BUT there is one problem: the paint DEPTH is always randomly placed.
Here is the code; you just need to connect the sceneView with the storyboard.
https://github.com/javaplanet17/test/blob/master/drawingar
My question is: how do I make the program so that the depth will always be consistent? By consistent I mean there is always the same distance between the paint and the camera.
If you run the code above you will see that I have printed out all the SCNMatrix4 values, but none of them is the DEPTH.
I have tried to change hitTransform.m43 but it only messes up the x and y.
If you want to get a point some consistent distance in front of the camera, you don’t want a hit test. A hit test finds the real world surface in front of the camera — unless your camera is pointed at a wall that’s perfectly parallel to the device screen, you’re always going to get a range of different distances.
If you want a point some distance in front of the camera, you need to get the camera’s position/orientation and apply a translation (your preferred distance) to that. Then to place SceneKit content there, use the resulting matrix to set the transform of a SceneKit node.
The easiest way to do this is to stick to SIMD vector/matrix types throughout rather than converting between those and SCN types. SceneKit adds a bunch of new accessors in iOS 11 so you can use SIMD types directly.
There’s at least a couple of ways to go about this, depending on what result you want.
Option 1
// set up z translation for 20 cm in front of whatever
// last column of a 4x4 transform matrix is translation vector
var translation = matrix_identity_float4x4
translation.columns.3.z = -0.2
// get camera transform the ARKit way
// (currentFrame and pointOfView are optionals; unwrap them safely in real code)
let cameraTransform = view.session.currentFrame!.camera.transform
// if we wanted, we could go the SceneKit way instead; result is the same
// let cameraTransform = view.pointOfView!.simdTransform
// set node transform by multiplying matrices
node.simdTransform = cameraTransform * translation
This option, using a whole transform matrix, not only puts the node a consistent distance in front of your camera, it also orients it to point the same direction as your camera.
Option 2
// distance vector for 20 cm in front of whatever
let translation = float3(x: 0, y: 0, z: -0.2)
// treat distance vector as in camera space, convert to world space
// (pointOfView is optional; unwrap it safely in real code)
let worldTranslation = view.pointOfView!.simdConvertPosition(translation, to: nil)
// set node position (not whole transform)
node.simdPosition = worldTranslation
This option sets only the position of the node, leaving its orientation unchanged. For example, if you place a bunch of cubes this way while moving the camera, they’ll all be lined up facing the same direction, whereas with option 1 they’d all be in different directions.
Going beyond
Both of the options above are based only on the 3D transform of the camera — they don’t take the position of a 2D touch on the screen into account.
If you want to do that, too, you’ve got more work cut out for you — essentially what you’re doing is hit testing touches not against the world, but against a virtual plane that’s always parallel to the camera and a certain distance away. That plane is a cross section of the camera projection frustum, so its size depends on what fixed distance from the camera you place it at. A point on the screen projects to a point on that virtual plane, with its position on the plane scaling in proportion to the plane's distance from the camera.
So, to map touches onto that virtual plane, there are a couple of approaches to consider. (Not giving code for these because it’s not code I can write without testing, and I’m in an Xcode-free environment right now.)
Make an invisible SCNPlane that’s a child of the view’s pointOfView node, parallel to the local xy-plane and some fixed z distance in front. Use SceneKit hitTest (not ARKit hit test!) to map touches to that plane, and use the worldCoordinates of the hit test result to position the SceneKit nodes you drop into your scene.
Use Option 1 or Option 2 above to find a point some fixed distance in front of the camera (or a whole translation matrix oriented to match the camera, translated some distance in front). Use SceneKit’s projectPoint method to find the normalized depth value Z for that point, then call unprojectPoint with your 2D touch location and that same Z value to get the 3D position of the touch location with your camera distance. (For extra code/pointers, see my similar technique in this answer.)
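For what it's worth, here is an untested sketch of that second approach; sceneView, touchPoint, and the 0.2 m distance are assumptions, not code from the question:
import ARKit
import SceneKit

// Untested sketch: map a 2D touch onto a virtual plane 20 cm in front of the camera.
func worldPosition(for touchPoint: CGPoint, in sceneView: ARSCNView) -> SCNVector3? {
    guard let pointOfView = sceneView.pointOfView else { return nil }

    // A point 20 cm in front of the camera, converted to world space.
    let anchorPoint = pointOfView.convertPosition(SCNVector3(0, 0, -0.2), to: nil)

    // Find the normalized screen-space depth (z) of that point...
    let projected = sceneView.projectPoint(anchorPoint)

    // ...then unproject the touch location at that same depth.
    return sceneView.unprojectPoint(SCNVector3(Float(touchPoint.x), Float(touchPoint.y), projected.z))
}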
I have a UIView in the middle of the screen, 100x100 in this case, and it should function like a "target" guideline for the user.
When the user presses the Add button, an SCNBox should be added to the world at that exact same spot, with width / height / scale / distance / rotation corresponding to the UIView's size and position on the screen.
This image may help in understanding what I mean.
The UIView's size may vary, but it will always be rectangular and centered on the screen. The corresponding 3D model also may vary. In this case, a square UIView will map to a box, but later the same square UIView may be mapped into a cylinder with corresponding diameter (width) and height.
Any ideas how I can achieve something like this?
I've tried scrapping the UIView and placing the box as the placeholder, as a child of the sceneView.pointOfView, and later converting its position / parent to the rootNode, with no luck.
Thank you!
Get the center position of the UIView in its parent view, and use the scene renderer's (SCNSceneRenderer) unprojectPoint method to convert it to 3D coordinates in the scene; that gives you the position where the 3D object should be placed. You will have to implement some means to determine the z value.
Obviously the distance of the object will determine its size on screen, but if you also want to scale the object you could use the inverse of your camera's zoom level as the scale. By zoom level I mean your own means of zooming (e.g. moving the camera closer than a default distance would create smaller-scale models). If the UIView changes in size, you could, in addition to or instead of the center point, unproject all of its corner points individually into 3D space, but it may be easier to just convert the scale of the UIView to the scale of the 3D node. For example, if you know the maximum size the UIView will be, you can express a smaller view as a percentage of its max size and use the same percentage to scale the object.
You also mentioned the rotation of the object should correspond to the UIView. I assume that means you want the object to face the camera. For this you can apply a rotation to the object based on the .transform property of the pointOfView node.
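As a rough, untested sketch of that idea (sceneView, targetView, the 0.5 m depth, and the 10 cm box size are all assumptions for illustration):
import ARKit
import SceneKit

// Rough sketch: place a box where the centered target UIView appears on screen,
// at an arbitrary 0.5 m depth, oriented to face the camera.
func placeBox(matching targetView: UIView, in sceneView: ARSCNView) {
    guard let pointOfView = sceneView.pointOfView,
          let container = targetView.superview else { return }

    // Center of the target view in the scene view's coordinate space.
    let center = container.convert(targetView.center, to: sceneView)

    // Choose a depth: project a point 0.5 m in front of the camera to get its normalized z.
    let anchor = pointOfView.convertPosition(SCNVector3(0, 0, -0.5), to: nil)
    let depth = sceneView.projectPoint(anchor).z

    // Unproject the view's center at that depth to get the 3D position.
    let position = sceneView.unprojectPoint(SCNVector3(Float(center.x), Float(center.y), depth))

    // 10 cm placeholder box; map the UIView's size to world units per the scaling discussion above.
    let box = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0))
    box.position = position
    box.orientation = pointOfView.orientation   // face the same way as the camera
    sceneView.scene.rootNode.addChildNode(box)
}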
I am trying to find out why, when I apply affine transformations to an image in OpenCV, the result is not visible in the preview window and the entire window is black. How can I find a workaround for this problem so that I can always view my transformed image (the result of the affine transform) in the window, no matter the applied transformation?
Update: I think that this happens because all the transformations are calculated with respect to the origin of the coordinate system (top left corner of the image). While for rotation I can specify the center of the rotation, and I am able to view the result, when I perform scaling I am not able to control where the transformed image goes. Is it possible to somehow move the coordinate system to make the image fit in the window?
Update2: I have an image which contains only a ROI at some position in it (the rest of the image is black), and I need to apply a set of affine transforms to it. To make things simpler and to see the effect of each individual transform, I applied each transform one by one. What I noticed is that, whenever I move (translate) the image such that the center of the ROI is at the origin of the coordinate system (top-left corner of the view window), all the affine transforms perform correctly without the ROI drifting away. However, with the center of the ROI translated to the origin, the upper and left parts of the ROI remain cut off, outside the current view window.
If I move the ROI's central point to another point in the view window (for example, the window center), an affine transform of the form:
A = [a 0 0; 0 b 0]   (A is a 2x3 matrix, the parameter of the warpAffine function)
moves the image (ROI) outside of the view window (which doesn't happen if the ROI's center is in the top-left corner). How can I modify the affine transform so the image doesn't move out of its place (i.e. behaves the same way as when the ROI center is at the origin of the coordinate system)?
If you want to be able to apply any affine transform, you will not always be able to view it. A better idea might be to manually apply your transform to 4 corners of a square and then look at the coordinates where those 4 points end up. That will tell you where your image is going.
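The question is about OpenCV, but the corner check itself is just matrix arithmetic; here is a small sketch using SIMD types (the 640x480 size and the scale factors are placeholders):
import simd

// Apply an affine transform (promoted to 3x3) to the four corners of an image
// rectangle to see where the image will land.
let (w, h): (Float, Float) = (640, 480)
let corners: [SIMD3<Float>] = [SIMD3(0, 0, 1), SIMD3(w, 0, 1), SIMD3(w, h, 1), SIMD3(0, h, 1)]

// Example transform: scale x by 1.5 and y by 2.
let M = simd_float3x3(rows: [SIMD3(1.5, 0, 0), SIMD3(0, 2, 0), SIMD3(0, 0, 1)])

for corner in corners {
    let p = M * corner
    print("corner (\(corner.x), \(corner.y)) ends up at (\(p.x), \(p.y))")
}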
If you have several transforms, just combine them into one transform. If you have 3 transforms
[A],[B],[C]
transforming an image by A, then B, then C is equivalent to transforming the image once by
[C]*[B]*[A]
If your transforms are in 2x3 matrices, just convert them to 3x3 matrices by adding
[0,0,1]
as the new bottom row, then multiply the 3x3 matrices together. When you are finished, the bottom row will be unchanged; just drop it to get your new, combined affine transform.
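Continuing in the same SIMD-based sketch style (in OpenCV you would do the same with 3x3 cv::Mat objects; the matrices here are placeholders):
import simd

// Promote a 2x3 affine matrix (given as two rows) to 3x3 by appending [0, 0, 1].
func promote(_ row0: SIMD3<Float>, _ row1: SIMD3<Float>) -> simd_float3x3 {
    simd_float3x3(rows: [row0, row1, SIMD3<Float>(0, 0, 1)])
}

let A = promote(SIMD3(1.5, 0, 0), SIMD3(0, 2, 0))    // e.g. a scale
let B = promote(SIMD3(1, 0, 40),  SIMD3(0, 1, 25))   // e.g. a translation
let C = promote(SIMD3(0, -1, 0),  SIMD3(1, 0, 0))    // e.g. a 90-degree rotation

// Applying A, then B, then C to the image is equivalent to applying this once:
let combined = C * B * A
// The bottom row of `combined` is still [0, 0, 1]; drop it to get the 2x3 affine transform.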
Update
If you want to apply a transform to an object as if the object were somewhere else, you can combine 3 transforms. First translate the object to the location where you want it to be transformed (the origin of the coordinate system, in your case) with an affine transform [A]. Then apply your scaling transform [B], then a translation back to where you started. The translation back should be the inverse of [A]. That means your final transform would be
final_transform = [A].inv()*[B]*[A]
The order of operations reads right to left when doing matrix multiplication.
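In the same sketch style (cx and cy are placeholder coordinates for the ROI center you want to scale about):
import simd

// A translates the ROI center to the origin; B is the scaling you actually want.
let (cx, cy): (Float, Float) = (320, 240)
let A = simd_float3x3(rows: [SIMD3(1, 0, -cx), SIMD3(0, 1, -cy), SIMD3(0, 0, 1)])
let B = simd_float3x3(rows: [SIMD3(1.5, 0, 0), SIMD3(0, 2, 0),   SIMD3(0, 0, 1)])

// final_transform = [A].inv() * [B] * [A]: translate in, scale, translate back.
let finalTransform = A.inverse * B * A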
I have a texture with 250px width and 2000px height. A 250x250 part of it is drawn on screen according to various conditions (some kind of sprite sheet, yes). All I want is to draw it within a fixed destination rectangle with some rotation. Is it possible?
Yes. Here's how to effectively rotate your destination rectangle:
Take a look at the overloads for SpriteBatch.Draw.
Notice that none of the overloads that take a Rectangle as a destination take a rotation parameter. It's because such a thing does not make much sense. It's ambiguous as to how you want the destination rotated.
But you can achieve the same effect as setting a destination rectangle by careful use of the position and scale parameters. Combine these with the origin (centroid of scaling and rotation, specified in pixels in relation to your sourceRectangle) and rotation parameters to achieve the effect you want.
(If, on the other hand, you want to "fit" to a rectangle - effectively scaling after rotating - you would have to also use the transformMatrix parameter to Begin.)
Now, your question isn't quite clear on this point: but if the effect you are after is more like rotating your source rectangle, this is not something you can achieve with plain ol' SpriteBatch.
The quick-and-dirty way to achieve this is to set a viewport that acts as your destination rectangle. Then draw your rotated sprite within it. Note that SpriteBatch's coordinate system is based on the viewport, not the screen.
The "nicer" (but much harder to implement) way to do it would be to not use SpriteBatch at all, but implement your own sprite drawing that will allow you to rotate the texture coordinates.