I'm trying to detect when the camera is facing my object that I've placed in ARSKView. Here's the code:
override func update(_ currentTime: TimeInterval) {
    // Called before each frame is rendered
    guard let sceneView = self.view as? ARSKView else {
        return
    }
    if let currentFrame = sceneView.session.currentFrame {
        //let cameraZ = currentFrame.camera.transform.columns.3.z
        for anchor in currentFrame.anchors {
            if let spriteNode = sceneView.node(for: anchor), spriteNode.name == "token", intersects(spriteNode) {
                // token is within the camera view
                let distance = simd_distance(anchor.transform.columns.3,
                                             currentFrame.camera.transform.columns.3)
                //print("DISTANCE BETWEEN CAMERA AND TOKEN: \(distance)")
                if distance <= captureDistance {
                    // token is within the camera view and within capture distance
                    print("token is within the camera view and within capture distance")
                }
            }
        }
    }
}
The problem is that the intersects method returns true both when the object is directly in front of the camera and when it is directly behind you. How can I update this code so it only detects when the spriteNode is in the current camera viewfinder? I'm using SpriteKit, by the way, not SceneKit.
Here's the code I'm using to actually create the anchor:
self.captureDistance = captureDistance
guard let sceneView = self.view as? ARSKView else {
    return
}
// Create anchor using the camera's current position
if sceneView.session.currentFrame != nil {
    print("token dropped at \(distance) meters and bearing: \(bearing)")
    // Add a new anchor to the session
    let transform = getTransformGiven(bearing: bearing, distance: distance)
    let anchor = ARAnchor(transform: transform)
    sceneView.session.add(anchor: anchor)
}

func getTransformGiven(bearing: Float, distance: Float) -> matrix_float4x4 {
    let origin = MatrixHelper.translate(x: 0, y: 0, z: distance * -1)
    let bearingTransform = MatrixHelper.rotateMatrixAroundY(degrees: bearing * -1, matrix: origin)
    return bearingTransform
}
I have spent a while looking at this, and have come to the conclusion that trying to get the distance between the currentFrame.camera and the anchor doesn't work, simply because it returns similar values regardless of whether the anchor is in front of, or behind, the camera. By this I mean that if we assume our anchor is at point x, and we move forwards 1 meter or backwards 1 meter, the distance from the camera to the anchor is still 1 meter.
As such, after some experimenting, I believe we need to look at the following variables and functions to help us detect whether our SKNode is in front of the camera:
(a) The zPosition of the SpriteNode, which refers to:
The z-order of the node (used for ordering). Negative z is "into" the screen, positive z is "out" of the screen.
(b) open func intersects(_ node: SKNode) -> Bool, which:
Returns true if the bounds of this node intersects with the transformed bounds of the other node, otherwise false.
As such the following seems to do exactly what you need:
override func update(_ currentTime: TimeInterval) {

    //1. Get The Current ARSKView & Current Frame
    guard let sceneView = self.view as? ARSKView, let currentFrame = sceneView.session.currentFrame else { return }

    //2. Iterate Through Our Anchors & Check For Our Token Node
    for anchor in currentFrame.anchors {
        if let spriteNode = sceneView.node(for: anchor), spriteNode.name == "token" {
            /*
            If The ZPosition Of The SpriteNode Is Negative It Can Be Seen As Into The Screen, Whereas Positive Is Out Of The Screen.
            However, We Also Need To Know Whether The Camera Frustum (SKScene) Intersects Our Object.
            If Our ZPosition Is Negative & The SKScene Doesn't Intersect Our Node, Then We Can Assume It Isn't Visible.
            */
            if spriteNode.zPosition <= 0 && intersects(spriteNode) {
                print("In Front Of Camera")
            } else {
                print("Not In Front Of Camera")
            }
        }
    }
}
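If you prefer to reason in ARKit terms instead of SpriteKit's zPosition, you can also express the anchor's position in camera space and test the sign of its z component (in camera coordinates, visible content lies along negative z). A minimal sketch of that check, assuming you have the current ARFrame:

import ARKit
import simd

/// Returns true when the anchor sits in front of the camera, i.e. its
/// position expressed in camera space has a negative z component.
func isInFrontOfCamera(_ anchor: ARAnchor, frame: ARFrame) -> Bool {
    // camera.transform maps camera space to world space; its inverse maps back
    let anchorInCameraSpace = simd_mul(frame.camera.transform.inverse, anchor.transform)
    return anchorInCameraSpace.columns.3.z < 0
}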
Hope it helps...
You can also use this function to check the camera's position:
- (void)session:(ARSession *)session didUpdateFrame:(ARFrame *)frame {
    simd_float4x4 transform = session.currentFrame.camera.transform;
    SCNVector3 position = SCNVector3Make(transform.columns[3].x,
                                         transform.columns[3].y,
                                         transform.columns[3].z);
    // Call any function to check the position.
}
I'll give you a clue: check the zPosition like this.
if let spriteNode = sceneView.node(for: anchor),
   spriteNode.name == "token",
   intersects(spriteNode) && spriteNode.zPosition < 0 { ... }
When adding a text MeshResource, with no angle and with a fixed world position, it looks fine from the camera perspective.
However, when the user walks to the other side of the text entity and turns around, it looks mirrored.
I don't want to use the look(at:) API since I only want to rotate it 180 degrees around the Y-axis, and reset the angle to 0 when the user passes it again.
First we have to put the text in an anchor that will stay in the same orientation even when we rotate the text. Then add a textIsMirrored variable that will handle rotation when changed:
class TextAnchor: Entity, HasAnchoring {

    let textEntity = ModelEntity(mesh: .generateText("text"))

    var textIsMirrored = false {
        willSet {
            if newValue != textIsMirrored {
                if newValue == true {
                    textEntity.setOrientation(.init(angle: .pi, axis: [0, 1, 0]), relativeTo: self)
                } else {
                    textEntity.setOrientation(.init(angle: 0, axis: [0, 1, 0]), relativeTo: self)
                }
            }
        }
    }

    required init() {
        super.init()
        textEntity.scale = [0.01, 0.01, 0.01]
        anchoring = AnchoringComponent(.plane(.horizontal, classification: .any, minimumBounds: [0.3, 0.3]))
        addChild(textEntity)
    }
}
Then in your ViewController you can create an anchor that has the camera as a target, so we can track the camera position, and create our textAnchor:
let cameraAnchor = AnchorEntity(.camera)
let textAnchor = TextAnchor()
For this to work you have to add them as children of your scene (preferably in viewDidLoad):
arView.scene.addAnchor(cameraAnchor)
arView.scene.addAnchor(textAnchor)
Now, in an ARSessionDelegate function, you can check the camera position relative to your text and rotate the text when the z component drops below 0:
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    if cameraAnchor.position(relativeTo: textAnchor).z < 0 {
        textAnchor.textIsMirrored = true
    } else {
        textAnchor.textIsMirrored = false
    }
}
I'm currently trying to build an AR Chess app and I'm having trouble getting the movement of the pieces working.
I would like to be able to tap on a chess piece, have the legal moves it can make on the chess board highlighted, and then have it move to whichever square the user taps on.
Pic of the chess board design and nodes:
https://gyazo.com/2a88f9cda3f127301ed9b4a44f8be047
What I would like to implement:
https://imgur.com/a/IGhUDBW
Would greatly appreciate any suggestions on how to get this working.
Thanks!
ViewController Code:
import UIKit
import SceneKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Set the view's delegate
        sceneView.delegate = self
        // Show statistics such as fps and timing information
        sceneView.showsStatistics = true
        // Add lighting to the scene
        sceneView.autoenablesDefaultLighting = true
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Create a session configuration to track an external image
        let configuration = ARImageTrackingConfiguration()
        // Image detection
        // Reference which group to find the image to detect in the Assets folder e.g. "Detection Card"
        if let imageDetect = ARReferenceImage.referenceImages(inGroupNamed: "Detection Card", bundle: Bundle.main) {
            // Sets image tracking properties to the image in the referenced group
            configuration.trackingImages = imageDetect
            // Amount of images to be tracked
            configuration.maximumNumberOfTrackedImages = 1
        }
        // Run the view's session
        sceneView.session.run(configuration)
    }

    // Run when the tracked image is detected, to display the 3D object onto it
    // ARAnchor - marks a certain point in world space as relevant to your app, making virtual content appear "attached" to some real-world point of interest
    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        // Creates the 3D object
        let obj = SCNNode()
        // Check if the anchor detected through the camera is an ARImageAnchor - which contains position and orientation data about the image detected in the session
        if let imageAnchor = anchor as? ARImageAnchor {
            // Set dimensions of the plane to be displayed onto the image to be the same as the detected image
            let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width,
                                 height: imageAnchor.referenceImage.physicalSize.height)
            // Display a mildly transparent layer onto the detected image
            // This is to confirm image detection works by displaying a faint layer on the image
            plane.firstMaterial?.diffuse.contents = UIColor(white: 1.0, alpha: 0.2)
            // Set geometry shape of the plane
            let planeNode = SCNNode(geometry: plane)
            // Flip vertical plane to horizontal plane
            planeNode.eulerAngles.x = -Float.pi / 2
            obj.addChildNode(planeNode)
            // Initialise chess scene
            if let chessBoardSCN = SCNScene(named: "art.scnassets/chess.scn") {
                // If there is a first node in the scene file
                if let chessNodes = chessBoardSCN.rootNode.childNodes.first {
                    // Displays chessboard upright
                    chessNodes.eulerAngles.x = Float.pi / 2
                    // Adds chessboard to the overall 3D scene
                    obj.addChildNode(chessNodes)
                }
            }
        }
        return obj
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        // Pause the view's session
        sceneView.session.pause()
    }
}
You will need to add gestures to your view and use the ARSCNView's hitTest method to detect what the gesture is touching in your scene. You can then update the positions based on the movement from the gestures.
Here is a question that deals with roughly the same requirement of dragging nodes around.
Placing, Dragging and Removing SCNNodes in ARKit
First, you need to add a tap gesture recognizer in your viewDidLoad, like this:
let tapGesture = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
myScnView.addGestureRecognizer(tapGesture)
Then implement the handler function:
@objc
func handleTap(_ gestureRecognize: UIGestureRecognizer) {
    // HERE YOU NEED TO DETECT THE TAP
    // check what nodes are tapped
    let location = gestureRecognize.location(in: myScnView)
    let hitResults = myScnView.hitTest(location, options: [:])
    // check that we clicked on at least one object
    if hitResults.count > 0 {
        // retrieve the first clicked object
        let tappedPiece = hitResults[0].node
        // HERE YOU CAN SHOW POSSIBLE MOVES
        // e.g. showPossibleMoves(for: tappedPiece)
    }
}
Now, to show the possible moves, you need to identify all the quadrants and your node's position on the chessboard.
To do this, you can assign a name, a number, a combination of letters and numbers, or a combination of numbers. (I suggest a combination of numbers, like row 1 column 1, as in a matrix.)
Let's take my suggestion, so you can name each quadrant 1.1, 1.2, ... 2.1, 2.2 and so on, as in the sketch below.
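For illustration, a hypothetical setup that builds and names one quadrant node per square with that row.column scheme (squareSize and boardNode are assumptions, not from the original code):

let squareSize: CGFloat = 0.05 // hypothetical square size in metres

// Build one flat quadrant node per square, named "row.column" so it
// can be retrieved later with childNode(withName:recursively:)
for row in 1...8 {
    for column in 1...8 {
        let quadrant = SCNNode(geometry: SCNPlane(width: squareSize, height: squareSize))
        quadrant.name = "\(row).\(column)"
        quadrant.eulerAngles.x = -.pi / 2 // lay the plane flat on the board
        quadrant.position = SCNVector3(Float(column) * Float(squareSize),
                                       0,
                                       Float(row) * Float(squareSize))
        boardNode.addChildNode(quadrant) // boardNode: the chessboard's parent node (assumed)
    }
}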
Now, to detect where your piece is, you can check for contacts with the SCNPhysicsContactDelegate:
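As a rough sketch of that idea (assuming the naming scheme above, that both pieces and quadrants have physics bodies with contactTestBitMask values set so contacts are reported, and that GameViewController is a hypothetical class name):

extension GameViewController: SCNPhysicsContactDelegate {

    // Called when two physics bodies begin touching; we use it to record
    // which quadrant a piece is resting on.
    func physicsWorld(_ world: SCNPhysicsWorld, didBegin contact: SCNPhysicsContact) {
        let nodes = [contact.nodeA, contact.nodeB]
        // Quadrants follow the "row.column" naming convention from above
        if let piece = nodes.first(where: { $0.name?.contains(".") == false }),
           let quadrant = nodes.first(where: { $0.name?.contains(".") == true }) {
            print("\(piece.name ?? "piece") is on quadrant \(quadrant.name ?? "?")")
        }
    }
}

Remember to set myScnView.scene.physicsWorld.contactDelegate = self, otherwise the callback never fires.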
Now you have the tappedPiece and the place where it is, so you have to define the rules for the pieces, for example:
let rules = ["tower":"cross"] //add the others
N.B. You can choose however you want to define the rules.
Sticking with my suggestion, you should now create the function to highlight:
func highlight(quadrant: SCNNode) {
    quadrant.geometry?.firstMaterial?.emission.contents = UIColor.yellow
}
Finally, the showPossibleMoves(for: tappedPiece) could be something like this:
func showPossibleMoves(for piece: SCNNode) {
    // You have to give the name as you did in your rules variable
    // e.g. if you have rules like ["tower":"cross"], set all tower names to "tower"
    guard let pieceType = piece.name, let rule = rules[pieceType] else { return }
    switch rule {
    case "cross":
        // You have to highlight all nodes to the right, left, above and below.
        // You can achieve this by taking the start point and increasing it.
        // Assuming you named your quadrants like 1.1 1.2, or 11 12 13 etc...
        guard let startName = startQuadrant.name,
              let first = startName.first, let startRow = Int(String(first)),
              let last = startName.last, let startColumn = Int(String(last)) else { return }
        // Now loop the highlight to the right
        for column in (startColumn + 1)...(MAX_COLUMN - 1) {
            if let quadrant = myScnView.scene.rootNode.childNode(withName: "\(startRow).\(column)", recursively: true) {
                // call highlight function
                highlight(quadrant: quadrant)
            }
        }
        // Now loop for the quadrants above
        for row in (startRow + 1)...(MAX_ROW - 1) {
            if let quadrant = myScnView.scene.rootNode.childNode(withName: "\(row).\(startColumn)", recursively: true) {
                // call highlight function
                highlight(quadrant: quadrant)
            }
        }
        // DO THE SAME FOR ALL DIRECTIONS
    default:
        break
    }
    // ADD ALL CASES, like bishop movements "diagonals" and so on
}
NOTE: In the handleTap function you have to check what you're tapping. For example, to check if you're tapping on a quadrant after selecting a piece (i.e. you want to move your piece), you can check a boolean value and the name of the selected node:
// assuming you have set the boolean value after selecting a piece
if pieceSelected && node.name != "tower" {
    // HERE YOU CAN MOVE YOUR PIECE
}
I'm creating my anchor and 2d node in ARSKView like so:
func displayToken(distance: Float) {
    print("token dropped at: \(distance)")
    guard let sceneView = self.view as? ARSKView else {
        return
    }
    // Create anchor using the camera's current position
    if let currentFrame = sceneView.session.currentFrame {
        removeToken()
        // Create a transform with a translation of x meters in front of the camera
        var translation = matrix_identity_float4x4
        translation.columns.3.z = -distance
        let transform = simd_mul(currentFrame.camera.transform, translation)
        // Add a new anchor to the session
        let anchor = ARAnchor(transform: transform)
        sceneView.session.add(anchor: anchor)
    }
}
func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode? {
    // Create and configure a node for the anchor added to the view's session.
    if let image = tokenImage {
        let texture = SKTexture(image: image)
        let tokenImageNode = SKSpriteNode(texture: texture)
        tokenImageNode.name = "token"
        return tokenImageNode
    } else {
        return nil
    }
}
This places it exactly in front of the camera at a given distance (z). What I also want to do is take a latitude/longitude for an object, calculate an angle or heading in degrees, and initially drop the anchor/node at this angle from the camera. I'm currently getting the heading by using the GMSGeometryHeading method, which takes the user's current location and the target location to determine the heading. So when dropping the anchor, I want to put it in the right direction towards the target location's lat/lon. How can I achieve this with SpriteKit/ARKit?
Can you clarify your question a bit please?
Perhaps the following lines can help you as an example. Here the cameraNode moves using a basic geometry formula, moving obliquely depending on the angle (in Euler) in both the x and z coordinates:
var angleEuler = 0.1
let rotateX = Double(cameraNode.presentation.position.x) - sin(degrees(radians: Double(angleEuler))) * 10
let rotateZ = Double(cameraNode.presentation.position.z) - abs(cos(degrees(radians: Double(angleEuler)))) * 10
cameraNode.position = SCNVector3(x: Float(rotateX), y: 0, z: Float(rotateZ))
If you want an object to fall in front of the camera, with the length of the fall depending on an angle, just calculate the value of "a" in the geometry formula "tan A = a/b" and update the node's position.y accordingly.
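In code, a one-line sketch of that formula (b is a hypothetical horizontal distance to the node, angleA the angle in radians):

let a = b * tan(angleA)       // opposite side from tan A = a / b
node.position.y -= Float(a)   // drop the node by the computed fall length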
I hope this helps
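Coming back to the original bearing question: a minimal sketch of how the anchor transform could be built from a compass heading, assuming the session runs an ARWorldTrackingConfiguration with worldAlignment = .gravityAndHeading so that -z points true north (the function name and parameters are illustrative, not from the original code):

import ARKit
import simd

/// Builds a world transform `distance` meters from the origin,
/// rotated around the y-axis by `bearingDegrees` (clockwise from north).
func transformFor(bearingDegrees: Float, distance: Float) -> simd_float4x4 {
    // Start with a translation straight ahead (-z, i.e. north) at the given distance
    var translation = matrix_identity_float4x4
    translation.columns.3.z = -distance

    // Rotation around the y-axis; negative because compass bearings run clockwise
    let angle = -bearingDegrees * .pi / 180
    let rotation = simd_float4x4(simd_quatf(angle: angle, axis: SIMD3<Float>(0, 1, 0)))

    // Rotate the translation so the anchor ends up in the bearing's direction
    return simd_mul(rotation, translation)
}

// Hypothetical usage, with `bearing` coming from GMSGeometryHeading:
// let anchor = ARAnchor(transform: transformFor(bearingDegrees: bearing, distance: distance))
// sceneView.session.add(anchor: anchor)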
When the application launches, a vertical surface is first detected on one wall; then the camera focuses on a second wall, where another surface is detected. The first wall is now no longer visible to the ARCamera, but this code gives me the anchors of the first wall. I need the anchors of the second wall, which is currently visible/focused in the camera.
if let anchor = sceneView.session.currentFrame?.anchors.first {
    let node = sceneView.node(for: anchor)
    addNode(position: SCNVector3Zero, anchorNode: node)
} else {
    debugPrint("anchor node is nil")
}
The clue to the answer is in the first line of your if let statement.
Let's break this down:
When you say let anchor = sceneView.session.currentFrame?.anchors.first, you are referencing an optional array of ARAnchor, which naturally can have more than one element.
Since you are always calling first, e.g. index [0], you will always get the 1st ARAnchor which was added to the array.
Since you now have 2 anchors, you naturally need the last (latest) element. As such you can try this as a starter:
if let anchor = sceneView.session.currentFrame?.anchors.last {
    let node = sceneView.node(for: anchor)
    addNode(position: SCNVector3Zero, anchorNode: node)
} else {
    debugPrint("anchor node is nil")
}
Update:
Since another poster has interpreted the question differently, in that they believe the question is "how can I detect if an ARPlaneAnchor is in view?", let's approach it another way.
First we need to take into consideration that the ARCamera has a frustum in which our content is shown:
As such, we would then need to determine whether an ARPlaneAnchor is in view of that frustum.
First we will create 2 variables:
var planesDetected = [ARPlaneAnchor: SCNNode]()
var planeID: Int = 0
The 1st to store the ARPlaneAnchor and its associated SCNNode, and the 2nd in order to provide a unique ID for each plane.
In the ARSCNViewDelegate we can visualise an ARPlaneAnchor and then store its information, e.g:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

    //1. Get The Current ARPlaneAnchor
    guard let anchor = anchor as? ARPlaneAnchor else { return }

    //2. Create An SCNNode & Geometry To Visualize The Plane
    let planeNode = SCNNode()
    let planeGeometry = SCNPlane(width: CGFloat(anchor.extent.x), height: CGFloat(anchor.extent.z))
    planeGeometry.firstMaterial?.diffuse.contents = UIColor.cyan
    planeNode.geometry = planeGeometry

    //3. Set The Position Based On The Anchor's Extent & Rotate It
    planeNode.position = SCNVector3(anchor.center.x, anchor.center.y, anchor.center.z)
    planeNode.eulerAngles.x = -.pi / 2

    //4. Add The PlaneNode To The Node & Give It A Unique ID
    node.addChildNode(planeNode)
    planeNode.name = String(planeID)

    //5. Store The Anchor & Node
    planesDetected[anchor] = planeNode

    //6. Increment The Plane ID
    planeID += 1
}
Now that we have stored the detected planes, we of course need to determine whether any of these are in view of the ARCamera, e.g:
/// Detects If An Object Is In View Of The Camera Frustum
func detectPlaneInFrustumOfCamera() {

    //1. Get The Current Point Of View
    if let currentPointOfView = augmentedRealityView.pointOfView {

        //2. Loop Through All The Detected Planes
        for (_, planeNode) in planesDetected {
            if augmentedRealityView.isNode(planeNode, insideFrustumOf: currentPointOfView) {
                print("ARPlaneAnchor With ID \(planeNode.name!) Is In View")
            } else {
                print("ARPlaneAnchor With ID \(planeNode.name!) Is Not In View")
            }
        }
    }
}
Finally, we need to call this function, which we could do, for example, in the delegate method renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval):
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    detectPlaneInFrustumOfCamera()
}
Hopefully both of these will point you in the right direction...
In order to get the node that is currently in the point of view, you can do something like this:
var targettedAnchorNode: SCNNode?

if let anchors = sceneView.session.currentFrame?.anchors {
    for anchor in anchors {
        if let anchorNode = sceneView.node(for: anchor),
           let pointOfView = sceneView.pointOfView,
           sceneView.isNode(anchorNode, insideFrustumOf: pointOfView) {
            targettedAnchorNode = anchorNode
            break
        }
    }
    if let targettedAnchorNode = targettedAnchorNode {
        addNode(position: SCNVector3Zero, anchorNode: targettedAnchorNode)
    } else {
        debugPrint("Targetted node not found")
    }
} else {
    debugPrint("Anchors not found")
}
If you would like to get all focused nodes, collect into an array the nodes satisfying the condition.
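A small sketch of that idea, reusing the names from the snippet above:

// Collect every anchor node currently inside the camera frustum
var focusedAnchorNodes = [SCNNode]()
if let anchors = sceneView.session.currentFrame?.anchors,
   let pointOfView = sceneView.pointOfView {
    for anchor in anchors {
        if let anchorNode = sceneView.node(for: anchor),
           sceneView.isNode(anchorNode, insideFrustumOf: pointOfView) {
            focusedAnchorNodes.append(anchorNode)
        }
    }
}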
Good luck!
I created an SCNSphere so now it looks like a planet kind of. This is exactly what I want. My next goal is to allow users to rotate the sphere using a pan gesture recognizer. They are allowed to rotate it around the X or Y axis. I was just wondering how I can do that. This is what I have so far.
origin = sceneView.frame.origin

node.geometry = SCNSphere(radius: 1)
node.geometry?.firstMaterial?.diffuse.contents = UIImage(named: "world.jpg")

let panGestureRecognizer = UIPanGestureRecognizer(target: self, action: #selector(CategoryViewController.panGlobe(sender:)))
sceneView.addGestureRecognizer(panGestureRecognizer)

func panGlobe(sender: UIPanGestureRecognizer) {
    // What should I put inside this method to allow them to rotate the sphere/ball?
}
We have a ViewController that contains a node sphereNode that contains our sphere.
To rotate the sphere we could use a UIPanGestureRecognizer.
Since the recognizer reports the total distance our finger has traveled on the screen, we cache the last point that was reported to us.
var previousPanPoint: CGPoint?
let pixelToAngleConstant: Float = .pi / 180
func handlePan(_ newPoint: CGPoint) {
if let previousPoint = previousPanPoint {
let dx = Float(newPoint.x - previousPoint.x)
let dy = Float(newPoint.y - previousPoint.y)
rotateUp(by: dy * pixelToAngleConstant)
rotateRight(by: dx * pixelToAngleConstant)
}
previousPanPoint = newPoint
}
We calculate dx and dy as the number of pixels our finger has traveled in each direction since the recognizer last reported.
With the pixelToAngleConstant we convert the pixel value into an angle (in radians) to rotate our sphere by. Use a bigger constant for a faster rotation.
The gesture recognizer reports a state that we can use to determine whether the gesture has started, the finger has moved, or the gesture has ended.
When the gesture starts, we save the finger's location in previousPanPoint.
When our finger moves, we call the function above.
When the gesture ends or is cancelled, we clear previousPanPoint.
@objc func handleGesture(_ gestureRecognizer: UIPanGestureRecognizer) {
    switch gestureRecognizer.state {
    case .began:
        previousPanPoint = gestureRecognizer.location(in: view)
    case .changed:
        handlePan(gestureRecognizer.location(in: view))
    default:
        previousPanPoint = nil
    }
}
How do we rotate our sphere?
The functions rotateUp and rotateRight just call our more general function, rotate(by:around:), which accepts not only the angle but also the axis to rotate around.
rotateUp rotates around the x-axis, rotateRight around the y-axis.
func rotateUp(by angle: Float) {
    let axis = SCNVector3(1, 0, 0) // x-axis
    rotate(by: angle, around: axis)
}

func rotateRight(by angle: Float) {
    let axis = SCNVector3(0, 1, 0) // y-axis
    rotate(by: angle, around: axis)
}
The rotate(by:around:) function is relatively simple in this case because we assume that the node is not translated, i.e. we want to rotate around the origin of the node's local coordinate system.
Everything is a little more complicated in the general case, but this answer is only a small starting point.
func rotate(by angle: Float, around axis: SCNVector3) {
    let transform = SCNMatrix4MakeRotation(angle, axis.x, axis.y, axis.z)
    sphereNode.transform = SCNMatrix4Mult(sphereNode.transform, transform)
}
We create a rotation matrix from the angle and the axis and multiply the old transform of our sphere with the calculated one to get the new transform.
This is the little demo I created:
This approach has two major downsides.
It only rotates around the node's coordinate origin and only works properly if the node's position is SCNVector3Zero.
It takes neither the speed of the gesture into account, nor does the sphere continue to rotate when the gesture stops.
An effect similar to a table view, where you can flick your finger and the table view scrolls fast and then slows down, can't easily be achieved with this approach.
One solution would be to use the physics system for that; a rough sketch follows.
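A minimal sketch of that physics idea (assuming the same sphereNode, view and pixelToAngleConstant as above; the damping value is just a guess to tune): give the sphere a dynamic physics body, then on the final pan event convert the gesture velocity into an angular velocity and let angularDamping slow the spin down over time:

// Run once (e.g. in viewDidLoad): give the sphere a dynamic physics body
// so the physics engine can keep it spinning after the gesture ends.
func setupSpherePhysics() {
    let body = SCNPhysicsBody(type: .dynamic, shape: nil)
    body.isAffectedByGravity = false
    body.angularDamping = 0.3 // hypothetical value: higher means the spin dies out sooner
    sphereNode.physicsBody = body
}

// Call from the pan handler when the gesture ends: convert the final
// finger velocity (points/second) into an angular velocity (axis + rad/s).
func applySpin(from gestureRecognizer: UIPanGestureRecognizer) {
    let velocity = gestureRecognizer.velocity(in: view)
    let vx = Float(velocity.x) * pixelToAngleConstant
    let vy = Float(velocity.y) * pixelToAngleConstant
    let speed = sqrt(vx * vx + vy * vy)
    guard speed > 0 else { return }
    // Axis is perpendicular to the drag direction; w is the speed in rad/s
    sphereNode.physicsBody?.angularVelocity = SCNVector4(vy / speed, vx / speed, 0, speed)
}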
Below is what I tried; I'm not sure whether it is accurate with respect to angles, but it sufficed for most of my needs...
@objc func handleGesture(_ gestureRecognizer: UIPanGestureRecognizer) {
    let translation = gestureRecognizer.translation(in: gestureRecognizer.view!)
    let x = Float(translation.x)
    let y = Float(-translation.y)
    let anglePan = sqrt(pow(x, 2) + pow(y, 2)) * Float.pi / 180.0
    var rotationVector = SCNVector4()
    rotationVector.x = x
    rotationVector.y = y
    rotationVector.z = 0.0
    rotationVector.w = anglePan
    self.earthNode.rotation = rotationVector
}
Sample Github-EarthRotate