DragAlongSurface Script moves object back to initial position after drag is finished - 8thwall-xr

Currently, I have a Plane with the Layer "Surface" and the DragAlongSurface script attached. I have the table GameObject from the example, and it also has the surface controller attached to it. When I try to move the object, it moves to the desired location but comes back to its initial position after the drag is over. Please suggest a way to make the object stay at its final position.

It seems like you have things swapped. You'll want XRSurfaceController attached to the Plane (which should be on the "Surface" layer). DragAlongSurface should be attached to the object you wish to drag around (the table, which should NOT be on the "Surface" layer).

Related

Konva object snapping with transformer jitters

I'm trying to make an editor using Konva.js.
In the editor I show a smaller draw area which becomes the final image. For this I'm using a group with a clipFunc. This gives a better UX since the transform controls of the transformer can be used "outside" of the canvas (the part visible to the user), and it allows the user to zoom and drag the draw area around (imagine a frame in Figma).
I want to implement object snapping (just edges and center for now) based on this: https://konvajs.org/docs/sandbox/Objects_Snapping.html. However, I want it to work when multiple elements are selected in my Transformer.
The strategy I'm using is basically calculating the snapping based on the .back element created by the transformer. When I know how much to snap I apply it to every node within the transformer.
However, when I do this, the items start jittering when the cursor moves close to the snapping lines.
My previous implementation had the draw area fill the entire Stage, and I did manage to get that working with the same strategy (no jitter).
I don't really know what the issue is and I hope some of you guys can point me in the right direction.
I created a sandbox to illustrate my issue: https://codesandbox.io/s/konva-transformer-snapping-1vwjc2?file=/src/index.ts

The object gets blocked by the background in Spark AR

I have a Spark AR effect with a custom background (which is masked out where a person is detected).
I also have a 3D object attached in front of the user's forehead.
The problem is that the object gets hidden when the user goes slightly farther from the camera, because(?) the view gets blocked by the custom background which becomes closer to the camera than the object.
Is there a way to keep the object fully visible, no matter how far the user goes from the camera?
The only workaround I can come up with is to prevent the z coordinate from being less than zero, but it's far from ideal, because I need to keep the object at the same distance from the forehead.
You need to uncheck "Use Depth Test" and "Write to Depth Buffer" in the material for the object you would like to remain visible.
In the Scene hierarchy, move the object above your canvas/background.
In the material properties you will see Advanced Render Options; if you expand it you will see "Use Depth Test", which you have to uncheck. Also make sure your rectangle comes before the background rectangle.

Swift SceneKit - I am trying to figure out if a Node object goes off the screen

Using SceneKit
I want to make the gray transparent box disappear and only show the colored boxes when the user zooms in.
So I want to detect when that box's edges start to fall off the screen as I zoom, so I can hide the gray box accordingly.
First thoughts, but there may be better solutions:
You could project the node's position to screen coordinates with projectPoint and check against the view bounds, doing the +/- math on object size and skipping Z. I "think" that would work (sketched below).
You could do some physics-based collision detection against invisible box or plane geometries that act as your screen edges. This has some complexity if your view is changing, but testing would be easy - just leave them visible until you get what you want, then set isHidden = true.
isNode(_:insideFrustumOf:) - returns a Boolean for whether the node "might" be visible. I'm assuming "might" means it could be obscured by other geometry, which in your case shouldn't matter. (edit) On second thought, that doesn't solve your problem, but I'll leave it here for reference.
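Here is a minimal Swift sketch of the projectPoint idea, assuming an SCNView named scnView and treating the node's bounding box corners as a stand-in for "object size"; the helper name is illustrative, and the frustum check is shown as a comment.

```swift
import SceneKit

// Rough sketch: returns true when every corner of the node's bounding box
// projects inside the view bounds. Assumes `scnView` is your SCNView.
func isNodeFullyOnScreen(_ node: SCNNode, in scnView: SCNView) -> Bool {
    let (minB, maxB) = node.boundingBox
    for x in [minB.x, maxB.x] {
        for y in [minB.y, maxB.y] {
            for z in [minB.z, maxB.z] {
                // Corner in local space -> world space -> view coordinates (ignore Z).
                let world = node.convertPosition(SCNVector3(x: x, y: y, z: z), to: nil)
                let screen = scnView.projectPoint(world)
                if !scnView.bounds.contains(CGPoint(x: CGFloat(screen.x), y: CGFloat(screen.y))) {
                    return false
                }
            }
        }
    }
    return true
}

// Coarser alternative from the last suggestion: ask the renderer whether the node
// might be visible at all from the current point of view (`grayBoxNode` is hypothetical).
// if let pov = scnView.pointOfView {
//     let mightBeVisible = scnView.isNode(grayBoxNode, insideFrustumOf: pov)
// }
```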

How to replace SCNNode with another on the same place with points on marker?

My AR example app shows a 3D heart model on touch. When the 3D model is touched again, I want to show part markers on the heart. For that, I created two models: one showing the plain heart and another showing the heart with arrow markers for its parts.
But I don't know how to place the marker model in the exact place where I placed the previous node. The node should change only on tap; the scale, transform, and position all have to stay the same.
Can you tell me how to do it?
Create an empty node on the first tap and add your heart model as its child node. On the next tap, remove the heart node and add the marker node as a child of the same empty node.
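A hedged Swift sketch of that container-node approach; the function names, the hitPosition parameter, and the assumption that both models are already loaded as SCNNodes are illustrative, not from the original post.

```swift
import SceneKit

// First tap: place an empty container node at the hit position and attach the heart model.
func placeHeart(at hitPosition: SCNVector3, in scene: SCNScene, heartNode: SCNNode) -> SCNNode {
    let containerNode = SCNNode()
    containerNode.position = hitPosition
    containerNode.addChildNode(heartNode)
    scene.rootNode.addChildNode(containerNode)
    return containerNode
}

// Later taps: swap the plain heart and the marker model under the same container.
// Because both models are children of the same container, they inherit its position,
// scale, and any other transforms, so the swap keeps everything in place.
func swapModels(in containerNode: SCNNode, heartNode: SCNNode, markerNode: SCNNode) {
    if heartNode.parent === containerNode {
        heartNode.removeFromParentNode()
        containerNode.addChildNode(markerNode)
    } else {
        markerNode.removeFromParentNode()
        containerNode.addChildNode(heartNode)
    }
}
```

You would call placeHeart from the first tap handler and swapModels on subsequent taps on the node.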

How to drag and drop multiple SpriteKit Nodes with the same parent?

I have multiple SKSpriteNodes (some rectangles) which are draggable (I followed the tutorial Sprite Kit Tutorial: How To Drag and Drop Sprites). When a collision happens between them, I group them (by making one rectangle a parent and the other a child). No matter how many rectangles I combine, I always end up with one parent and multiple rectangles that belong to it. I am doing this because I want to move the cubes belonging to a group together, and I observed that if I move the parent, I move all of its children. What I do to achieve this is to rebuild the group at touchesBegan: I make the touched node the parent and all the other nodes of the group children of this new parent. I believe the following image may make things a little bit more clear.
The problem I am facing is that I can drag the group even if I touch the white space (shown with a red circle) between the horizontal and vertical rectangles. As all rectangles shown in the image have the same parent, I guess that there is a bounding box that includes them all, and this is why the white space in the middle can trigger a drag event.
Does anyone have any idea how I can deal with this issue?
Is it possible to have a bounding box as shown in the following image?
Thanks in advance.
You need to write custom hit testing to perform this kind of trick.
For every click -> for every box (within a certain range of the touch) -> for every other box (within a certain range of the touch) -> combine the two box frames into one (CGRectUnion(r1, r2)) and see if your finger is within the resulting rect.
This might give unwanted hits for a lot of dispersed rectangles, so limit your initial search of boxes to a given range from the touch itself.
Apart from that it's just simple code.
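A rough Swift sketch of that hit test, assuming the grouped rectangles are direct children of one parent node and the touch location has already been converted into that parent's coordinate space (e.g. via touch.location(in: groupParent)); the function name and the searchRadius default are illustrative.

```swift
import SpriteKit

// Returns true if the touch should count as hitting the group: either it lands on a
// single box, or it lands inside the union of two nearby boxes' frames.
func groupContains(touchLocation: CGPoint, groupParent: SKNode, searchRadius: CGFloat = 200) -> Bool {
    // Limit the initial search to boxes within a given range of the touch.
    let nearbyBoxes = groupParent.children.filter { box in
        hypot(box.position.x - touchLocation.x, box.position.y - touchLocation.y) <= searchRadius
    }

    // Direct hit on any single box.
    if nearbyBoxes.contains(where: { $0.frame.contains(touchLocation) }) {
        return true
    }

    // For every pair of nearby boxes, combine their frames into one rect
    // (the Swift equivalent of CGRectUnion) and test the touch against it.
    for (i, boxA) in nearbyBoxes.enumerated() {
        for boxB in nearbyBoxes.dropFirst(i + 1) {
            if boxA.frame.union(boxB.frame).contains(touchLocation) {
                return true
            }
        }
    }
    return false
}
```

You would call this from touchesBegan and only begin dragging the group when it returns true.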
