I have the following code in a tap gesture handler on my scene view:
let location = gesture.location(in: self.sceneView)
let hitTestScene = self.sceneView.hitTest(location, options:nil)
if let first = hitTestScene.first {
}
I am able to get the node from first.
What I want: suppose I have a wall node, an SCNPlane or an SCNBox with a very large height. If the user taps a particular location, say halfway up the node, I want that point within the node. So the question is: with sceneView.hitTest I can get the node that was tapped, but I also want the location within that node, i.e. where exactly it was tapped, so that I can measure the height from the origin to the tapped location.
Any suggestion will be appreciated.
Okay, I found the solution.
We can get the position within the hit node via SCNHitTestResult's localCoordinates property:
let location = self.centerImage.center
let hitTestScene = self.sceneView.hitTest(location, options:nil)
if let first = hitTestScene.first {
print("localCoordinates in tapped ndoe is" first.localCoordinates )
}
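To measure the height from the node's origin to the tapped point, which was the original goal, the y component of localCoordinates can be read directly, since it is expressed in the tapped node's own coordinate space. A minimal sketch along those lines (note that for an SCNPlane or SCNBox the local origin sits at the geometry's center):
```swift
let location = gesture.location(in: self.sceneView)
if let first = self.sceneView.hitTest(location, options: nil).first {
    // localCoordinates is in the tapped node's own space,
    // so y is the vertical offset from the node's origin (its center).
    let heightFromOrigin = first.localCoordinates.y
    print("tapped \(heightFromOrigin) above the node's origin")
}
```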
Hope this is helpful to someone.
I'm working on a simple app using ARKit where a user can tap the screen and place a node (SCNNode) at a given location. I want the user to be able to place nodes that stay in place no matter where the camera is, so that when they pan back to the location where they placed the node, it's still there.
I've gotten the tap functionality to work, but I've noticed that when I physically move my device along the x-axis, the placed node moves along with it. I've tried to anchor the nodes to something other than the root node, but it hasn't worked as expected. I also tried to look up documentation on how the root node is positioned and whether it's calculated from the camera, which would explain why the nodes move with it, but no luck there either.
Here's the code for placing the nodes. The node position is set using scenePoint, which is a projection from the touch location into the scene, done as in SceneKit: unprojectPoint returns same/similar point no matter where you touch screen.
let nodeImg = SCNNode(geometry: SCNSphere(radius: 0.05))
nodeImg.physicsBody = .static()
nodeImg.geometry?.materials.first?.diffuse.contents = hexColor
nodeImg.geometry?.materials.first?.specular.contents = UIColor.white
nodeImg.position = SCNVector3(scenePoint.x, scenePoint.y, scenePoint.z)
print(nodeImg.position)
sceneView.scene.rootNode.addChildNode(nodeImg)
I think this has something to do with the fact that I'm adding the nodeImg node as a child to the rootNode, but I'm not sure what else to anchor it to.
On tap you need to set the worldPosition of the node, not just its position.
Check this link: ARKit: position vs worldposition vs simdposition
Assuming you have set sceneView.allowsCameraControl = false
@objc func handleTapGesture(withGestureRecognizer recognizer: UITapGestureRecognizer) {
    let location: CGPoint = recognizer.location(in: self.sceneView)
    let hits = self.sceneView.hitTest(location, options: nil)
    if let tappedNode = hits.first?.node {
        nodeImg.worldPosition = tappedNode.worldPosition
        self.sceneView.scene.rootNode.addChildNode(nodeImg)
    }
}
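As a side note, the snippet above snaps nodeImg to the tapped node's origin. If you instead want the node at the exact point that was touched, the hit result also exposes worldCoordinates. A small sketch, reusing the same nodeImg:
```swift
@objc func handleTapGesture(withGestureRecognizer recognizer: UITapGestureRecognizer) {
    let location = recognizer.location(in: self.sceneView)
    // worldCoordinates is the intersection point in world space,
    // so the node lands exactly where the finger hit the geometry.
    if let hit = self.sceneView.hitTest(location, options: nil).first {
        nodeImg.worldPosition = hit.worldCoordinates
        self.sceneView.scene.rootNode.addChildNode(nodeImg)
    }
}
```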
I started out with the template project you get when you choose an ARKit project. When you run the app, you can see the ship and view it from any angle.
However, once I allow camera control and tap on the screen or zoom into the ship by panning, the ship gets stuck to the camera. Now wherever I go with the camera, the ship stays stuck to the screen.
I went through the Apple Guide, and it seems they don't really consider this unexpected behavior, as there is nothing about it.
How do I keep the position of the ship fixed after I zoom it or touch the screen?
Well, it looks like allowsCameraControl is not the answer at all. It's good for SceneKit but not for ARKit (maybe it's useful for something in AR, but I'm not aware of it yet).
In order to zoom into the view, a UIPinchGestureRecognizer is required.
// 1. Find the touch location
// 2. Perform a hit test
// 3. From the results take the first result
// 4. Take the node from that first result and change the scale
@objc private func handlePinch(recognizer: UIPinchGestureRecognizer) {
    if recognizer.state == .changed {
        // 1.
        let location = recognizer.location(in: sceneView)
        // 2.
        let hitTestResults = sceneView.hitTest(location, options: nil)
        // 3.
        if let hitTest = hitTestResults.first {
            let shipNode = hitTest.node
            let newScaleX = Float(recognizer.scale) * shipNode.scale.x
            let newScaleY = Float(recognizer.scale) * shipNode.scale.y
            let newScaleZ = Float(recognizer.scale) * shipNode.scale.z
            // 4.
            shipNode.scale = SCNVector3(newScaleX, newScaleY, newScaleZ)
            // Reset so the next .changed event applies an incremental scale.
            recognizer.scale = 1
        }
    }
}
Regarding step 2: I got a little confused by another hitTest method, called hitTest(_:types:). Note from the documentation:
This method searches for AR anchors and real-world objects detected by the AR session, not SceneKit content displayed in the view. To search for SceneKit objects, use the view's hitTest(_:options:) method instead.
So that method cannot be used if you want to scale a node, which is SceneKit content.
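To make the distinction concrete, here is a small sketch of the two calls side by side, assuming sceneView is an ARSCNView and we are inside a gesture handler; the SceneKit variant returns SCNHitTestResults for nodes, while the ARKit variant returns ARHitTestResults for detected planes and feature points:
```swift
let location = recognizer.location(in: sceneView)

// SceneKit hit test: finds SCNNodes (e.g. the ship) under the touch.
let nodeHits: [SCNHitTestResult] = sceneView.hitTest(location, options: nil)

// ARKit hit test: finds detected real-world planes, not SceneKit nodes.
let planeHits: [ARHitTestResult] = sceneView.hitTest(location, types: .existingPlaneUsingExtent)
```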
I am working on an AR-based application using ARKit, with https://developer.apple.com/documentation/arkit/handling_3d_interaction_and_ui_controls_in_augmented_reality as the base. Using this I am able to move or rotate the whole virtual object.
Now, the virtual object has a lot of child nodes. I want to drag/move any child node with the user's finger, irrespective of the axis. The child SCNNode may be on the ground or floating. I want to move the object wherever the user's finger goes, irrespective of the axis or the euler angles of the child node. Is this even possible?
I followed the links below, but they only move the node along a particular axis:
ARKit - Drag a node along a specific axis (not on a plane)
Dragging SCNNode in ARKit Using SceneKit
I tried the code below, and it is not helping at all:
let tapPoint: CGPoint = gesture.location(in: sceneView)
let result = sceneView.hitTest(tapPoint, options: nil)
if result.isEmpty {
    return
}
let scnHitResult: SCNHitTestResult? = result.first
movedObject = scnHitResult?.node // .parent?.parent

let hitResults = self.sceneView.hitTest(tapPoint, types: .existingPlane)
if let hitResult = hitResults.last {
    movedObject?.position = SCNVector3Make(hitResult.worldTransform.columns.3.x,
                                           hitResult.worldTransform.columns.3.y,
                                           hitResult.worldTransform.columns.3.z)
}
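For axis-free dragging, one common approach (a sketch, not code from the linked sample) is to keep the node at its current screen-space depth and unproject the moving finger into world space. Assuming a pan gesture and that movedObject was picked as above:
```swift
@objc func handlePan(_ gesture: UIPanGestureRecognizer) {
    guard let node = movedObject else { return }
    let location = gesture.location(in: sceneView)
    // Project the node's current world position to get its screen-space depth (z),
    // then unproject the finger's 2D location at that same depth.
    let projected = sceneView.projectPoint(node.worldPosition)
    let unprojected = sceneView.unprojectPoint(
        SCNVector3(Float(location.x), Float(location.y), projected.z))
    // The node follows the finger regardless of axis or euler angles.
    node.worldPosition = unprojected
}
```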
I am creating a simple app with ARKit in which I add some text to the scene at the tapped position:
@objc func tapped(sender: UITapGestureRecognizer) {
    let sceneView = sender.view as! ARSCNView
    let tapLocation = sender.location(in: sceneView)
    let hitTest = sceneView.hitTest(tapLocation, types: .featurePoint)
    if !hitTest.isEmpty {
        self.addTag(tag: "A", hitTestResult: hitTest.first!)
    } else {
        print("no match")
    }
}
func addTag(tag: String, hitTestResult: ARHitTestResult) {
    let tagGeometry = SCNText(string: tag, extrusionDepth: 0.1)
    tagGeometry.font = UIFont(name: "Optima", size: 1)
    tagGeometry.firstMaterial?.diffuse.contents = UIColor.red
    let tagNode = SCNNode(geometry: tagGeometry)
    let transform = hitTestResult.worldTransform
    let thirdColumn = transform.columns.3
    tagNode.position = SCNVector3(thirdColumn.x, thirdColumn.y - tagNode.boundingBox.max.y / 2, thirdColumn.z)
    print("\(thirdColumn.x) \(thirdColumn.y) \(thirdColumn.z)")
    self.sceneView.scene.rootNode.addChildNode(tagNode)
}
It works, but I have a problem with the orientation of the text. When I add it from the camera's original position, the orientation is fine; I can see the text head-on (Sample 1). But when I turn the camera to the left or right and add the text by tapping, I see the added text from the side (Sample 2).
Sample 1:
Sample 2:
I know there should be some simple trick to solve this, but as a beginner in this topic I could not find it so far.
You want the text to always face the camera? SCNBillboardConstraint is your friend:
tagNode.constraints = [SCNBillboardConstraint()]
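By default the constraint rotates the node around all axes to face the camera; if you want the text to swivel only around the vertical axis so it never tilts, the constraint's freeAxes property can restrict it. A small sketch:
```swift
let billboard = SCNBillboardConstraint()
// Rotate only about the node's y axis; the text stays upright.
billboard.freeAxes = .Y
tagNode.constraints = [billboard]
```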
Am I correct in saying that you want the text to face the camera when you tap (wherever you happen to be facing), but then remain stationary?
There are a number of ways of adjusting the orientation of any node. For this case I would suggest simply setting the eulerAngles of the text node to be equal to those of the camera at the point at which you instantiate the text.
In your addTag() function you add:
guard let eulerAngles = self.sceneView.session.currentFrame?.camera.eulerAngles else { return }
tagNode.eulerAngles = SCNVector3(eulerAngles.x, eulerAngles.y, eulerAngles.z + .pi / 2)
The additional .pi / 2 ensures the text ends up in the correct orientation: ARKit's default is landscape orientation, so the text otherwise comes out sideways. This applies a rotation around the local z axis.
It's also plausible (and some may argue it's better) to use .localRotate() of the node, or to access its transform property, however I like the approach of manipulating both the position and eulerAngles directly.
Hope this helps.
EDIT: replaced Float(1.57) with .pi / 2.
I have been trying to work out whether a tap gesture falls inside an overlay polygon, to no avail.
I am trying to make a map of country overlays; on clicking an overlay, I want to be able to tell which country the overlay belongs to.
First I found this: Detecting touches on MKOverlay in iOS7 (MKOverlayRenderer), and this: detect if a point is inside a MKPolygon overlay, which suggest either:
Make a tiny rectangle around your touch point and see if it intersects any overlays:
```
// point clicked
let point = MKMapPointForCoordinate(newCoordinates)
// make a rectangle around this click
let mapRect = MKMapRectMake(point.x, point.y, 0, 0)
// loop through the polygons on the map and check for intersection
for polygon in worldMap.overlays as! [MKPolygon] {
    if polygon.intersectsMapRect(mapRect) {
        print("found intersection")
    }
}
```
Or using viewForOverlay with a promising-sounding function, CGPathContainsPoint; however, viewForOverlay is now deprecated.
This led me to Detecting a point in a MKPolygon broke with iOS7 (CGPathContainsPoint), which suggests the following method: make a mutable path from the points of each overlay (instead of using the deprecated viewForOverlay), then use CGPathContainsPoint to check whether the clicked point is inside the overlay.
However, I am unable to make this code work:
```
func overlaySelected(gestureRecognizer: UIGestureRecognizer) {
    let pointTapped = gestureRecognizer.locationInView(worldMap)
    let newCoordinates = worldMap.convertPoint(pointTapped, toCoordinateFromView: worldMap)
    let mapPointAsCGP = CGPointMake(CGFloat(newCoordinates.latitude), CGFloat(newCoordinates.longitude))
    print(mapPointAsCGP.x, mapPointAsCGP.y)

    for overlay: MKOverlay in worldMap.overlays {
        if overlay is MKPolygon {
            let polygon = overlay as! MKPolygon
            let mpr: CGMutablePathRef = CGPathCreateMutable()
            for p in 0..<polygon.pointCount {
                let mp = polygon.points()[p]
                print(polygon.coordinate)
                if p == 0 {
                    CGPathMoveToPoint(mpr, nil, CGFloat(mp.x), CGFloat(mp.y))
                } else {
                    CGPathAddLineToPoint(mpr, nil, CGFloat(mp.x), CGFloat(mp.y))
                }
            }
            if CGPathContainsPoint(mpr, nil, mapPointAsCGP, false) {
                print("------ is inside! ------")
            }
        }
    }
}
```
The first method works, but no matter how small I make the rectangle around the click point, e.g. let mapRect = MKMapRectMake(point.x, point.y, 0.00000000001, 0.00000000001), the accuracy of the tap is not reliable, so you can end up hitting several polygons at once.
Currently I decide which country is nearer to the tap using MKPolygon's coordinate property, which gives the central point of the polygon; I then measure the distance from each polygon's center to the tapped point and pick the closest. But this is not ideal, as the user may never be able to tap on the country they intend.
So, to sum up my questions:
Is there something I am not implementing correctly in the second method above (the one using CGPathContainsPoint)?
Is there a more accurate way to register a tap with the rectangle method?
Any other suggestions or pointers on how to achieve my goal of clicking the map and telling whether the click is on an overlay?
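Regarding the first question, a likely culprit in the CGPathContainsPoint version is a unit mismatch: the path is built from MKMapPoints (planar map-space values), while the tested point is built from raw latitude/longitude degrees, so the containment test compares incompatible coordinate spaces. A sketch of the same loop with both sides in map points (same Swift 2-era APIs as above; untested against this exact project):
```
func overlaySelected(gestureRecognizer: UIGestureRecognizer) {
    let pointTapped = gestureRecognizer.locationInView(worldMap)
    let newCoordinates = worldMap.convertPoint(pointTapped, toCoordinateFromView: worldMap)
    // Convert the tap to map-point space so it matches the path's units.
    let tapMapPoint = MKMapPointForCoordinate(newCoordinates)
    let tapAsCGPoint = CGPointMake(CGFloat(tapMapPoint.x), CGFloat(tapMapPoint.y))

    for overlay in worldMap.overlays where overlay is MKPolygon {
        let polygon = overlay as! MKPolygon
        let path = CGPathCreateMutable()
        // Build the path from the polygon's MKMapPoints, as before.
        for p in 0..<polygon.pointCount {
            let mp = polygon.points()[p]
            if p == 0 {
                CGPathMoveToPoint(path, nil, CGFloat(mp.x), CGFloat(mp.y))
            } else {
                CGPathAddLineToPoint(path, nil, CGFloat(mp.x), CGFloat(mp.y))
            }
        }
        // Both the path and the point are now in map-point units.
        if CGPathContainsPoint(path, nil, tapAsCGPoint, false) {
            print("tap is inside \(polygon.title ?? "polygon")")
        }
    }
}
```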