How to rotate an SKShapeNode around its own axis?

I want to rotate an SKShapeNode around its axis (a task I assumed would be easy to do in a framework like SpriteKit). The shape rotates, but around an axis that seems to be off screen...
Any ideas on what's going wrong here?
import SpriteKit

class Square: NSObject {
    // this gets created and initialized later...
    var shape: SKShapeNode? = nil
    //…

    func animateMe() {
        let action = SKAction.rotateByAngle(CGFloat(M_PI), duration: 1)
        self.shape?.runAction(action, completion: { [unowned self] () -> Void in
            self.shape?.removeFromParent()
        })
    }
}

Rotation happens around the node's position. So rather than positioning your shape by giving the shape node a path with an x/y offset, change the node's position and give it a path centered on (0, 0). The path is drawn relative to the node's position, so the shape still ends up where you want it, and the rotation axis now passes through it.
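A minimal sketch of that idea (the size, position, and scene reference here are only placeholders):

// rectOfSize builds a rectangular path centered on the node's origin (0, 0).
let shape = SKShapeNode(rectOfSize: CGSize(width: 100, height: 100))

// Position the node itself where the square should appear.
shape.position = CGPoint(x: 200, y: 300)
scene.addChild(shape)

// The node now spins around its own center.
shape.runAction(SKAction.rotateByAngle(CGFloat(M_PI), duration: 1))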
See this answer as well.

Related

Align 3D object parallel to vertical plane detected by estimatedVerticalPlane

I have this book, but I'm currently remixing the furniture app from the video tutorial that was free on AR/VR week.
I would like to have a 3D wall canvas aligned with the wall/vertical plane detected.
This is proving to be harder than I thought. Positioning isn't an issue: much like the furniture placement app, you can just take column 3 of the hitTest.worldTransform and use that vector3 as the new geometry's position.
But I do not know what I have to do to get my 3D object rotated to face forward on the detected plane. Since I have a canvas object, the photo is on one side of the canvas, and on placement the photo is ALWAYS facing away.
I thought about applying an arbitrary rotation to the canvas to face forward, but that was only correct if I was looking north and placed a canvas on a wall to my right.
I've tried quite a few solutions online; all but one use .existingPlaneUsingExtent for vertical plane detection. That lets you get the ARPlaneAnchor from hitTest.anchor as? ARPlaneAnchor.
If you try this when using .estimatedVerticalPlane, the anchor is nil.
I also didn't continue down this route, as my horizontal 3D objects started getting placed in the air. This may be down to control-flow logic, but I am ignoring it until the vertical canvas placement is working.
My current train of thought is to get the front vector of the canvas and rotate it towards the front-facing vector of the detected vertical plane or of the hit-test point.
How would I get a forward vector from a 3D point, or get the front vector from the grid image, the UIImage that is placed as an overlay when ARKit detects a vertical wall?
Here is an example: the canvas is showing its back and is not parallel with the detected vertical plane (the column). But there is a "Place Poster Here" grid, which is what I want the canvas to align with so that I can see the photo.
Things I have tried:
using .estimatedVerticalPlane
ARKit estimatedVerticalPlane hit test get plane rotation
I don't know how to correctly apply the matrix and Euler angle results from that SO answer.
My addPicture function:
func addPicture(hitTestResult: ARHitTestResult) {
    // I would like to convert the estimated hitTest to an anchor point;
    // it is easier to rotate a node to an anchor point than to calculate Euler angles.
    // We have all detected anchors in the _Renderer SCNNode, however there are
    // Get the current furniture item, correct its position if necessary,
    // and add it to the scene.
    let picture = pictureSettings.currentPicturePiece()
    // Look for the vertical node geometry in verticalAnchors.
    if let hitPlaneAnchor = hitTestResult.anchor as? ARPlaneAnchor {
        if let anchoredNode = verticalAnchors[hitPlaneAnchor] {
            // code removed, as an .estimatedVerticalPlane hitTestResult doesn't get here
        }
    } else {
        // Transform the hit result to world coordinates.
        let worldTransform = hitTestResult.worldTransform
        let anchoredNodeOrientation = worldTransform.eulerAngles
        picture.rotation.y = -.pi * anchoredNodeOrientation.y
        // Set the transform matrix.
        let positionMatrix = worldTransform.columns.3
        let position = SCNVector3(
            positionMatrix.x,
            positionMatrix.y,
            positionMatrix.z
        )
        picture.position = position + pictureSettings.currentPictureOffset()
    }
    // Parented to the rootNode of the scene.
    sceneView.scene.rootNode.addChildNode(picture)
}
Thanks for any help available.
Edited:
I have noticed the 'handedness' of the 3D model isn't correct / is opposite?
Positive Z is pointing to the left and positive X is facing the camera, for what I would expect to be the front of the model. Is this an issue?
You should try to avoid adding nodes directly into the scene using world coordinates. Rather, you should notify the ARSession of an area of interest by adding an ARAnchor, then use the session callback to vend an SCNNode for the added anchor.
For example, your hit test might look something like this:
@objc func tapped(_ sender: UITapGestureRecognizer) {
    let location = sender.location(in: sender.view)
    guard let hitTestResult = sceneView.hitTest(location, types: [.existingPlaneUsingGeometry, .estimatedVerticalPlane]).first,
        let planeAnchor = hitTestResult.anchor as? ARPlaneAnchor,
        planeAnchor.alignment == .vertical else { return }
    let anchor = ARAnchor(transform: hitTestResult.worldTransform)
    sceneView.session.add(anchor: anchor)
}
Here a tap gesture recognizer is used to detect taps within an ARSCNView. When a tap is detected, a hit test is performed looking for existing and estimated planes. If the plane is vertical, an ARAnchor is created with the worldTransform of the hit test result and added to the ARSession. This registers that point as an area of interest for the ARSession, so we'll get better tracking and less drift after our content is added there.
Next, we need to vend our SCNNode for the newly added ARAnchor. For example:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    if anchor is ARPlaneAnchor {
        let anchorNode = SCNNode()
        anchorNode.name = "anchor"
        return anchorNode
    } else {
        let plane = SCNPlane(width: 0.67, height: 1.0)
        plane.firstMaterial?.diffuse.contents = UIImage(named: "monaLisa")
        let planeNode = SCNNode(geometry: plane)
        planeNode.eulerAngles = SCNVector3(CGFloat.pi * -0.5, 0.0, 0.0)
        let node = SCNNode()
        node.addChildNode(planeNode)
        return node
    }
}
Here we're first checking whether the anchor is an ARPlaneAnchor. If it is, we vend an empty node for debugging purposes. If it is not, then it is an anchor that was added as the result of a hit test, so we create a geometry and a node for the most recent tap. Because it is a vertical plane and our content lies flat, we need to rotate the content about the x axis, so we adjust its eulerAngles to make it upright. If we were to return planeNode directly, the eulerAngles adjustment would be overridden, so we add it as a child node of an empty node and return that instead.
This should result in the poster standing upright, facing out from the detected vertical plane.
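As a side note, for the renderer(_:nodeFor:) callback above to fire, the scene view needs a delegate, and vertical plane detection must be enabled so the vertical hit tests have something to find. A typical setup might look like this (the sceneView outlet name is an assumption):

override func viewDidLoad() {
    super.viewDidLoad()
    sceneView.delegate = self   // so renderer(_:nodeFor:) gets called

    // Detect vertical planes so .existingPlaneUsingGeometry hit tests can succeed.
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.vertical]
    sceneView.session.run(configuration)
}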

Move nodes along axis with respect to camera position

I currently have a rootNode with multiple child nodes attached to it that I want to move around the scene together as a cluster. I currently move it along the x and y axes using left, right, up and down buttons that change the position of the rootNode a little every time a button is clicked; for example, to move left:
self.newRootNode.position.x = self.newRootNode.position.x - 0.01
This way, the cluster always moves with respect to the coordinate system set when the app is initialized. I'm trying to make it move with respect to the user's left and right every time they change their position. I've tried doing it as follows:
let nodeCam = self.sceneView.session.currentFrame!.camera
let cameraTransform = nodeCam.transform
self.newRootNode.position.x = cameraTransform.columns.3.x - 0.01
I know this is not what I want; I must be missing a transform from the camera's position to the root node's position, but I'm not sure what steps to follow.
What would be the right way to approach this? Do I need to reset tracking every time the user changes position? Any help would be appreciated :)
I believe you can use this function on your ARSession now in ARKit 1.5:
func setWorldOrigin(relativeTransform: matrix_float4x4)
Which:
Changes the basis for the AR world coordinate space using the specified transform.
Here is an example (untested) which may point you in the right direction:
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    guard let currentFrame = augmentedRealitySession.currentFrame?.camera else { return }
    let transform = currentFrame.transform
    augmentedRealitySession.setWorldOrigin(relativeTransform: transform)
}
Not forgetting, of course, to use ARSCNDebugOptions.showWorldOrigin in order to debug and adjust your matrices and transforms etc.
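For instance (assuming an ARSCNView outlet named sceneView):

// Visualise the world origin axes and detected feature points while debugging.
sceneView.debugOptions = [ARSCNDebugOptions.showWorldOrigin,
                          ARSCNDebugOptions.showFeaturePoints]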
You could also use the code in the delegate callback as an IBAction etc.
Hope it helps...

Get vector in SCNNode environment from touch location swift

I have the position and orientation of my camera and the CGPoint touch location on the screen. I need the line (preferably a vector) in the direction that I touched on the screen, in my 3D SCNNode environment. How can I get this?
A code snippet would be very helpful.
You can use the SCNSceneRenderer.unprojectPoint(_:) method for this.
This method, which is implemented by SCNView, takes the coordinates of your point as an SCNVector3. Set the first two components to the point's x and y in your view's coordinate space. Apple describes the use of the third component:
The z-coordinate of the point parameter describes the depth at which to unproject the point relative to the near and far clipping planes of the renderer's viewing frustum (defined by its pointOfView node). Unprojecting a point whose z-coordinate is 0.0 returns a point on the near clipping plane; unprojecting a point whose z-coordinate is 1.0 returns a point on the far clipping plane.
You are not looking for the locations of these points, but for the line that connects them. Just subtract one from the other to get the direction.
func getDirection(for point: CGPoint, in view: SCNView) -> SCNVector3 {
    let farPoint = view.unprojectPoint(SCNVector3Make(Float(point.x), Float(point.y), 1))
    let nearPoint = view.unprojectPoint(SCNVector3Make(Float(point.x), Float(point.y), 0))
    return SCNVector3Make(farPoint.x - nearPoint.x, farPoint.y - nearPoint.y, farPoint.z - nearPoint.z)
}
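As a hypothetical usage example (the scnView reference, the sphere marker and the 0.5-unit distance are all just for illustration), you could drop a marker a fixed distance along that line from a tap:

@objc func handleTap(_ sender: UITapGestureRecognizer) {
    guard let scnView = sender.view as? SCNView else { return }
    let location = sender.location(in: scnView)

    // Ray origin on the near clipping plane, plus the (unnormalized) direction to the far plane.
    let origin = scnView.unprojectPoint(SCNVector3Make(Float(location.x), Float(location.y), 0))
    let direction = getDirection(for: location, in: scnView)

    // Normalize the direction so we can step a fixed distance along the ray.
    let length = sqrt(direction.x * direction.x + direction.y * direction.y + direction.z * direction.z)
    let distance: Float = 0.5
    let position = SCNVector3Make(origin.x + direction.x / length * distance,
                                  origin.y + direction.y / length * distance,
                                  origin.z + direction.z / length * distance)

    // Drop a small sphere where the ray passes, 0.5 units in front of the near plane.
    let marker = SCNNode(geometry: SCNSphere(radius: 0.01))
    marker.position = position
    scnView.scene?.rootNode.addChildNode(marker)
}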

Trouble detecting if a CGPoint is inside a square (diamond-shape)

I have 2 SKSpriteNodes:
a simple square (A)
the same square with a rotation (-45°) (B)
I need to check, at any time, if the center of another SKSpriteNode (a ball) is inside one of these squares.
The ball and the squares have the same parent (the main scene).
override func update(_ currentTime: TimeInterval) {
    let spriteArray = self.nodes(at: ball.position)
    let arr = spriteArray.filter { $0.name == "square" }
    for square in arr {
        print(square.letter)
        if square.contains(self.puck.position) {
            print("INSIDE")
        }
    }
}
With the simple square (A), my code works correctly. The data are right: I know, at any time, whether the CGPoint center is inside or outside the square.
But with the rotated square (B), the data aren't as desired. The CGPoint is detected as inside as soon as it enters the bounding square that contains the diamond shape.
The SKSpriteNode squares are created via the level editor.
What can I do to get the correct result for the diamond shape?
EDIT 1
Using
view.showsPhysics = true
I can see the bounds of all the SKSpriteNodes that have a physicsBody. The bounds of my diamond square are the diamond square itself, not the grey square area.
square.frame.size -> returns the grey area
square.size -> returns the diamond square
In the Apple documentation, func nodes(at p: CGPoint) -> [SKNode] works on nodes and not frames, so why doesn't it work?
There are many ways to do it; I usually like to work with paths. If you have a perfect diamond as you describe, I'd like to offer a different approach from the comments: you can create a path that matches your diamond exactly with UIBezierPath, because it has a contains(_:) method:
let f = square.frame
let diamondPath = UIBezierPath()
// The diamond's corners sit at the midpoints of the bounding frame's edges.
diamondPath.move(to: CGPoint(x: f.midX, y: f.minY))
diamondPath.addLine(to: CGPoint(x: f.maxX, y: f.midY))
diamondPath.addLine(to: CGPoint(x: f.midX, y: f.maxY))
diamondPath.addLine(to: CGPoint(x: f.minX, y: f.midY))
diamondPath.close()

if diamondPath.contains(ball.position) {
    // point is inside diamond
}

Passing UIBezierPath to view class for drawing

I am making a level-based game with many different objects, all different. In each level, there will be different amounts of each type of object. Thus, I have been trying to make the drawing part as generic as possible, so that all I have to do is pass in the coords and it will automatically draw. To do this, I have made a protocol that forces each object class to implement the method getBP(), which returns the UIBezierPath to draw for it. Then the view class just has to say
Object.getBP().fill()
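(For context, a minimal sketch of that kind of setup might look like the following; the protocol and view names here are my guesses, not the actual code.)

protocol Drawable {
    // Each game object returns the path that represents it on screen.
    func getBP() -> UIBezierPath
}

class GameView: UIView {
    var objects: [Drawable] = []

    override func draw(_ rect: CGRect) {
        UIColor.darkGray.setFill()
        for object in objects {
            object.getBP().fill()
        }
    }
}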
However, this has been leading to some strange problems. The object does not draw at the correct coordinates: the y coordinate is correct, but the x coordinate always puts it at the left of the screen. I think it may be because the Bezier path is not being created in the view class. Here is my code in Surface.swift (this is meant to draw a surface in the game):
func getBP() -> UIBezierPath {
var rect:CGRect
var length:Double = getSurfaceVector().getMagnitude()//length of the surface
var cx = points.1.x+(points.0.x-points.1.x)//center coords of the surface
var cy = points.1.y+(points.0.y-points.1.y)
var bp = UIBezierPath(roundedRect: CGRectMake(CGFloat(cx - length/2), CGFloat(cy-RECT_HEIGHT/2), CGFloat(length), CGFloat(RECT_HEIGHT)), cornerRadius: CGFloat(5))
let transform:CGAffineTransform = CGAffineTransformMakeRotation(CGFloat(Double(angle)*(Double(M_PI)/Double(180))))
bp.applyTransform(transform)
return bp
}
points is just a tuple with the start and end points of the surface. RECT_HEIGHT is the height of the rectangle that is drawn to represent the surface. angle is the angle from horizontal of the surface.
Creating the surface in View.swift, I do this:
Surface(fixed: true, points: (Vector(x: 50, y:100), Vector(x: Double(UIScreen.mainScreen().bounds.width), y: 100)))
I add that surface to the array of objects in the game. I draw it in the View.swift file by saying
surface.stroke()
The surface draws on the screen with a y value of 100, but it is centered at x = 0, so it is half on and half off the screen. Also, it doesn't draw at the angle; it is always horizontal. Is there some better way of doing this? What is happening?
