I am developing a molecular visualizer for macOS / iPadOS with SceneKit. Long story short: when the user clicks (or touches) the screen at a certain position, a new atom should be placed there (in this example just an SCNSphere).
Previously, I had the allowsCameraControl property of the SCNView active, which allowed me to freely move the camera, and with the unprojectPoint() method I could successfully place a new node at the touch location. The limitation of the default camera controller is that it does not zoom: when you pinch the screen, it changes the FOV property of the camera instead of moving it along the Z axis.
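For reference, the zoom part of my custom controller just translates the camera along its local Z axis on a pinch. A minimal sketch of the idea (not my exact code; zoomSpeed is a made-up tuning constant, and on macOS you would use NSMagnificationGestureRecognizer instead):
@objc func handlePinch(_ sender: UIPinchGestureRecognizer) {
    guard let camNode = scnView.pointOfView else { return }
    let zoomSpeed: Float = 2.0
    let delta = Float(sender.scale - 1.0)
    // localTranslate moves the node in its own coordinate space, so -Z is "forward".
    camNode.localTranslate(by: SCNVector3(0, 0, -delta * zoomSpeed))
    sender.scale = 1.0
}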
Therefore, I made a custom camera node with an SCNCamera. I successfully recreated the default camera behaviour (movement, rotation), and furthermore I am able to correctly zoom into the scene. The downside is that the unprojectPoint() method no longer works as expected: the new nodes are placed very close to the camera node itself. No matter where I click on the scene, the unprojected point is always very close to (0, 0, 10).
internal func newNodeAt(point: CGPoint) {
    let pointVector = SCNVector3(point.x, point.y, 0.8)
    let position = self.unprojectPoint(pointVector)
    print("x:\(position.x), y: \(position.y), z: \(position.z)")
    let newSphere = SCNSphere(radius: 1)
    let newNode = SCNNode(geometry: newSphere)
    newNode.position = position // place the sphere at the unprojected point
    self.scene?.rootNode.addChildNode(newNode)
}
The camera node is set up as follows and is attached directly to the scene's root node.
internal func setupCameraNode() -> SCNNode {
    let cam = SCNCamera()
    cam.name = "camera"
    cam.zFar = 200
    cam.zNear = 0.1
    let camNode = SCNNode()
    camNode.camera = cam
    camNode.position = SCNVector3(0, 0, 5)
    camNode.name = "Camera node"
    return camNode
}
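The node is attached roughly like this (a sketch; assumes the code runs inside the SCNView subclass):
let cameraNode = setupCameraNode()
scene?.rootNode.addChildNode(cameraNode)
pointOfView = cameraNode // unprojectPoint() is relative to the view's pointOfView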
These are the printed positions after clicking at random positions in the scene.
x:-0.1988764852285385, y: -0.05589345842599869, z: 10.920427322387695
x:-0.18989555537700653, y: 0.14564114809036255, z: 10.920427322387695
x: 0.2168566882610321, y: 0.13085339963436127, z: 10.920427322387695
x: 0.24202580749988556, y: -0.15493911504745483, z: 10.920427322387695
x:-0.06516486406326294, y: -0.1781780868768692, z: 10.920427322387695
x:-0.08134553581476212, y: 0.12478446960449219, z: 10.920427322387695
x:-0.25866374373435974, y: 0.1456427276134491, z: 10.920427322387695
x: 0.217658132314682, y: 0.16270162165164948, z: 10.920427322387695
x: 0.2053154855966568, y: -0.12679903209209442, z: 10.920427322387695
I suppose unprojectPoint() is somehow related to the point of view, but I do not know how to fix this. Thanks.
I think you are on the right track; you just have to provide some kind of depth reference for the user. This is my code for something similar: when I call airStrike, I derive the depth from a plane facing the user, and that's how I know where Z needs to be.
Just a guess without a visual, but it seems there are a couple of options. Create a reference plane in the middle of the molecule and ++/-- that to show where the tap will land from a depth perspective.
Or just let them put it anywhere, then select it and depth++/depth-- to get it into the right position.
@objc func handleTap(recognizer: UITapGestureRecognizer) {
    let location: CGPoint = recognizer.location(in: gameScene)
    if data.isAirStrikeModeOn == true {
        // Project a known world point to get a usable depth (z) for unprojecting the tap.
        let projectedPoint = gameScene.projectPoint(SCNVector3(0, 0, 0))
        let scenePoint = gameScene.unprojectPoint(SCNVector3(location.x, location.y, CGFloat(projectedPoint.z)))
        gameControl.airStrike(position: scenePoint)
    }
}
After days of testing I figured out a workaround, and now I can place the nodes exactly where they should be.
My node tree is like this:
RootNode
├── CameraNode
└── atomNodes
    └── atom (individual spheres)
Therefore, all I had to do was convert the unprojected position from the RootNode (which I suppose is the one the camera takes its reference from) to atomNodes, thus:
let unprojected = unprojectPoint(SCNVector3(location.x, location.y, 0.99))
let position = atomNodes.convertPosition(unprojected, from: rootNode)
The 0.99 is just a Z position that works nicely in my view for the spheres to be placed.
My advice is to always check the node tree, because the positions are relative to each other.
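Putting it together, the corrected placement looks roughly like this (a sketch, assuming the method lives in the SCNView subclass and atomNodes is a child of the root node):
internal func newNodeAt(point: CGPoint) {
    // Unproject in the root node's space, then convert into atomNodes' space.
    let unprojected = unprojectPoint(SCNVector3(point.x, point.y, 0.99))
    let position = atomNodes.convertPosition(unprojected, from: scene!.rootNode)
    let newNode = SCNNode(geometry: SCNSphere(radius: 1))
    newNode.position = position
    atomNodes.addChildNode(newNode)
}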
Related
let material = SimpleMaterial(color: .red, roughness: 1, isMetallic: false)
let doorBox = MeshResource.generateBox(width: 0.02, height: 1, depth: 0.5)
let doorEntity = ModelEntity(mesh: doorBox, materials: [material])
let anchor = AnchorEntity()
anchor.addChild(doorEntity)
In RealityKit, I have a box which is a MeshResource; the box looks like a line. I have added this box to an ARView and set a real-time camera position. In one scenario I want to know the box/line's start and end positions.
Let's say the box entity's middle/current position is (0.1, 0.23, -1.3); what will the box's left and right positions be? The anchor with the box keeps changing its position with the camera movement.
Thanks in advance.
You can use this extension.
extension Entity {
    func getDistancedPosition(x: Float, y: Float, z: Float) -> SIMD3<Float> {
        let referenceNodeTransform = transform.matrix
        // Build a pure translation matrix for the requested offset.
        var translationMatrix = matrix_identity_float4x4
        translationMatrix.columns.3.x = x
        translationMatrix.columns.3.y = y
        translationMatrix.columns.3.z = z
        // Multiplying on the right applies the offset in the entity's local space.
        let updatedTransform = matrix_multiply(referenceNodeTransform, translationMatrix)
        return .init(updatedTransform.columns.3.x,
                     updatedTransform.columns.3.y,
                     updatedTransform.columns.3.z)
    }
}
To get the left and right positions for your box, use the code below:
let side1Position = door.getDistancedPosition(x: 0, y: 0, z: self.viewModel.doorDepth/2)
let side2Position = door.getDistancedPosition(x: 0, y: 0, z: -(self.viewModel.doorDepth/2))
To make the box look like a line you must have used the depth axis. If not, change the parameters accordingly, e.g. door.getDistancedPosition(x: -0.1, y: 0, z: 0).
You can also refer to this question and its accepted answer:
Position a SceneKit object in front of SCNCamera's current orientation
I am using this function to animate the rotation of an SCNNode:
let rotateNode = SCNAction.rotateTo(x: 0.0, y: CGFloat(headingAngle), z: 0.0, duration: TimeInterval(1.1), usesShortestUnitArc: true)
node.runAction(rotateNode)
It works really well for different directions, except when I need to rotate it towards the user (or the camera, for that matter).
My question is how to direct the SCNNode to move towards the user/camera, and how to calculate the headingAngle so that the SCNNode faces the user/camera while it is moving.
I perform the movement using:
let impulseVector = SCNVector3(
    x: 0.0,
    y: 5.0,
    z: 0.0
)
node.physicsBody?.applyForce(impulseVector, at: positionOnNodeToApplyForceTo, asImpulse: true) // propel
And I am aware that the headingAngle needs to be calculated using the atan2 function, but for some reason I do not manage to properly direct the SCNNode to move towards the camera, nor to rotate it to face the camera.
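For reference, a minimal sketch of that atan2 calculation (assuming the node's model faces -Z at zero rotation and both nodes share the same coordinate space; flip the signs if your model faces +Z):
// Horizontal vector from the node to the camera.
let dx = cameraNode.position.x - node.position.x
let dz = cameraNode.position.z - node.position.z
// Yaw that points the node's -Z axis at the camera.
let headingAngle = atan2(-dx, -dz)
let rotateNode = SCNAction.rotateTo(x: 0.0, y: CGFloat(headingAngle), z: 0.0, duration: 1.1, usesShortestUnitArc: true)
node.runAction(rotateNode)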
I'm using a UIPanGestureRecognizer so I can move nodes in 3D space.
Currently I have something that works, but only when the camera is exactly perpendicular to the plane. My UIPanGestureRecognizer handler looks like this:
@objc func handlePan(_ sender: UIPanGestureRecognizer) {
    let projectedOrigin = self.sceneView!.projectPoint(SCNVector3Zero)
    let viewCenter = CGPoint(
        x: self.view!.bounds.midX,
        y: self.view!.bounds.midY
    )
    let touchlocation = sender.translation(in: self.view!)
    let moveLoc = CGPoint(
        x: CGFloat(touchlocation.x + viewCenter.x),
        y: CGFloat(touchlocation.y + viewCenter.y)
    )
    let touchVector = SCNVector3(x: Float(moveLoc.x), y: Float(moveLoc.y), z: Float(projectedOrigin.z))
    let worldPoint = self.sceneView!.unprojectPoint(touchVector)
    let loc = SCNVector3(x: worldPoint.x, y: 0, z: worldPoint.z)
    worldHandle?.position = loc
}
The problem happens when the camera is rotated and the coordinates are affected by the perspective change: the touch position drifts away from the node being dragged.
Related SO post which I used to get to this point:
How to use iOS (Swift) SceneKit SCNSceneRenderer unprojectPoint properly
It referenced these great slides: http://www.terathon.com/gdc07_lengyel.pdf
The tricky part of going from a 2D touch position to 3D space is obviously the z-coordinate. Instead of trying to convert the touch position to an imaginary 3D space, map the 2D touch to a 2D plane in that 3D space using a hit test. Especially when movement is required in only two directions, for example like chess pieces on a board, this approach works very well. Regardless of the orientation of the plane and the camera settings (as long as the camera doesn't look at the plane from the side, obviously), this will map the touch position to a 3D position directly under the finger and follow it consistently.
I modified the Game template from Xcode with an example.
https://github.com/Xartec/PrecisePan/
The main parts are:
the pan gesture code:
// retrieve the SCNView
let scnView = self.view as! SCNView
// check what nodes are tapped
let p = gestureRecognize.location(in: scnView)
let hitResults = scnView.hitTest(p, options: [SCNHitTestOption.searchMode: 1, SCNHitTestOption.ignoreHiddenNodes: false])
if hitResults.count > 0 {
    // check if the XZPlane is in the hit results
    for result in hitResults {
        if result.node.name == "XZPlane" {
            //NSLog("Local Coordinates on XZPlane %f, %f, %f", result.localCoordinates.x, result.localCoordinates.y, result.localCoordinates.z)
            //NSLog("World Coordinates on XZPlane %f, %f, %f", result.worldCoordinates.x, result.worldCoordinates.y, result.worldCoordinates.z)
            ship.position = result.worldCoordinates
            ship.position.y += 1.5
            return
        }
    }
}
The addition of an XZ plane node in viewDidLoad:
let XZPlaneGeo = SCNPlane(width: 100, height: 100)
let XZPlaneNode = SCNNode(geometry: XZPlaneGeo)
XZPlaneNode.geometry?.firstMaterial?.diffuse.contents = UIImage(named: "grid")
XZPlaneNode.name = "XZPlane"
XZPlaneNode.rotation = SCNVector4(-1, 0, 0, Float.pi / 2)
//XZPlaneNode.isHidden = true
scene.rootNode.addChildNode(XZPlaneNode)
Uncomment the isHidden line to hide the helper plane; it will still work. The plane obviously needs to be large enough to fill the screen, or at least the portion where the user is allowed to pan.
By setting a global var to hold the start worldPosition of the pan (in state .began) and comparing it to the hit worldPosition in state .changed, you can determine the delta/translation in world space and translate other objects accordingly, as sketched below.
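A rough sketch of that bookkeeping (hypothetical names: scnView is the SCNView, draggedNode is the node being moved; same hit-test options as above):
var panStartWorldPosition = SCNVector3Zero
var nodeStartPosition = SCNVector3Zero

@objc func handlePan(_ sender: UIPanGestureRecognizer) {
    let p = sender.location(in: scnView)
    let hits = scnView.hitTest(p, options: [SCNHitTestOption.searchMode: 1, SCNHitTestOption.ignoreHiddenNodes: false])
    // Find the helper plane among the hit results, as in the tap handler above.
    guard let hit = hits.first(where: { $0.node.name == "XZPlane" }) else { return }
    switch sender.state {
    case .began:
        panStartWorldPosition = hit.worldCoordinates
        nodeStartPosition = draggedNode.position
    case .changed:
        // World-space translation since the pan began, kept on the XZ plane.
        let dx = hit.worldCoordinates.x - panStartWorldPosition.x
        let dz = hit.worldCoordinates.z - panStartWorldPosition.z
        draggedNode.position = SCNVector3(nodeStartPosition.x + dx, nodeStartPosition.y, nodeStartPosition.z + dz)
    default:
        break
    }
}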
In ARKit/SceneKit, when the user taps the button, I want to apply an impulse to my node. I want the impulse to come from the current user's perspective. This means the node would be moving away from the user's perspective. I'm able to get the current orientation/direction, thanks to this code:
func getUserVector() -> (SCNVector3, SCNVector3) { // (direction, position)
    if let frame = self.sceneView.session.currentFrame {
        let mat = SCNMatrix4(frame.camera.transform) // 4x4 transform matrix describing camera in world space
        let dir = SCNVector3(-1 * mat.m31, -1 * mat.m32, -1 * mat.m33) // orientation of camera in world space
        let pos = SCNVector3(mat.m41, mat.m42, mat.m43) // location of camera in world space
        return (dir, pos)
    }
    return (SCNVector3(0, 0, -1), SCNVector3(0, 0, -0.2))
}
via https://github.com/farice/ARShooter/blob/master/ARViewer/ViewController.swift#L191
I have an arbitrary SCNVector3 that I've created. It contains how high (Y axis), how far to the left or right, and how far forward to push the node.
I want to convert/translate my SCNVector3 so that it comes from the orientation/direction of the camera.
Meaning, I have
let (direction, position) = self.getUserVector()
let force = SCNVector3(x: 1.67, y: 13.83, z: -18.3)
How do I apply the force from the location/origin of the direction?
Figured it out after lots of googling. To convert the impulse vector3 to the direction I need, I used something like this:
let original = SCNVector3(x: 1.67, y: 13.83, z: -18.3)
// w = 0 makes this a direction rather than a point, so the camera's
// translation is ignored and only its rotation is applied.
let force = simd_make_float4(original.x, original.y, original.z, 0)
let rotatedForce = simd_mul(currentFrame.camera.transform, force)
let vectorForce = SCNVector3(x: rotatedForce.x, y: rotatedForce.y, z: rotatedForce.z)
node.physicsBody?.applyForce(vectorForce, asImpulse: true)
I want to use Xcode UI tests with the Fastlane Snapshot to make screenshots of the Cordova app. Basically, as my entire app is just a web view, all the Xcode UI test helper methods become irrelevant, and I just want to tap on specific points, e.g. tap(x: 10, y: 10) should produce a tap at the point {10px; 10px}.
That's probably very simple, but I can't figure out how to do it.
Thanks.
You can tap a specific point with the XCUICoordinate API. Unfortunately you can't just say "tap 10,10" referencing a pixel coordinate. You will need to create the coordinate with a relative offset to an actual view.
We can use the mentioned web view to interact with the relative coordinate.
let app = XCUIApplication()
let webView = app.webViews.element
let coordinate = webView.coordinateWithNormalizedOffset(CGVector(dx: 10, dy: 10))
coordinate.tap()
Side note, but have you tried interacting with the web view directly? I've had a lot of success using app.links["Link title"].tap() or app.staticTexts["A different link title"].tap(). Here's a demo app I put together demonstrating interacting with a web view.
Update: As Michal W. pointed out in the comments, you can now tap a coordinate directly, without worrying about normalizing the offset.
let normalized = webView.coordinate(withNormalizedOffset: CGVector(dx: 0, dy: 0))
let coordinate = normalized.withOffset(CGVector(dx: 10, dy: 10))
coordinate.tap()
Notice that I pass 0,0 to the normalized vector and then the actual point, 10,10, to the second call.
To go a little further off of Joe Masilotti's approach, I put mine in an extension and gave prepositional phrases to the global and local params.
func tapCoordinate(at xCoordinate: Double, and yCoordinate: Double) {
    let normalized = app.coordinate(withNormalizedOffset: CGVector(dx: 0, dy: 0))
    let coordinate = normalized.withOffset(CGVector(dx: xCoordinate, dy: yCoordinate))
    coordinate.tap()
}
By giving the parameters identifiable names I can easily read the call, for example:
tapCoordinate(at: 100, and: 200)
I found Laser's answer to work fine with Xcode 11, but made a few tweaks to easily integrate it into my testing.
extension XCUIApplication {
    func tapCoordinate(at point: CGPoint) {
        let normalized = coordinate(withNormalizedOffset: .zero)
        let offset = CGVector(dx: point.x, dy: point.y)
        let coordinate = normalized.withOffset(offset)
        coordinate.tap()
    }
}
Now, when I need to tap on a given location, I just provide a CGPoint and call this against my XCUIApplication like so:
let point = CGPoint(x: xCoord, y: yCoord)
app.tapCoordinate(at: point)
<something>.coordinate(withNormalizedOffset: CGVector.zero).withOffset(CGVector(dx: 10, dy: 60)).tap()
Pass .zero as the normalized vector and then the actual point (10, 60) as the offset.