Adding text to screen space with RealityKit, Swift and SwiftUI - iOS

How to add text to screen space in RealityKit/SwiftUI?
All the tutorials online are about adding text to screen space with UIKit/SceneKit, but none of them use SwiftUI/RealityKit. Even Apple uses UIKit for most of its examples!
Apple's example of screen space
It's not a straightforward conversion from Apple's explanation into SwiftUI/RealityKit. Apple gets the 2D screen point, but then updates the annotation view's frame with that 2D screen point.
To put the annotation in the right place on the screen, ask the ARView
to convert its entity’s world location to a 2D screen point.
guard let projectedPoint = arView.project(note.position) else { return }
All I have right now is a tap handler that raycasts to the tap location. From that I have world coordinates, which I use for my anchor position. I created a mesh of text, but when I add it to the anchor it exists in world space and isn't aligned properly.
let raycastResults = self.raycast(from: tapLocation, allowing: .estimatedPlane, alignment: .any)
guard let raycastFirstResult: ARRaycastResult = raycastResults.first else { return }
// worldTransform is a 4x4 matrix; the anchor position comes from its translation column
let position = raycastFirstResult.worldTransform
let mesh = MeshResource.generateText("Test")
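My rough (untested) idea is to re-project the anchor's world position each frame with ARView's project(_:) and drive a SwiftUI overlay from that 2D point, something like this sketch (AnnotationOverlay and screenPoint are just placeholder names I made up):
import SwiftUI
import RealityKit

// Untested sketch: a SwiftUI view that draws text at a projected 2D screen point.
struct AnnotationOverlay: View {
    let screenPoint: CGPoint?   // published by whatever owns the ARView

    var body: some View {
        if let point = screenPoint {
            Text("Test")
                .padding(4)
                .background(Color.black.opacity(0.5))
                .position(point)   // place the SwiftUI text at the projected 2D point
        }
    }
}

// Somewhere with access to the ARView (e.g. a scene-update subscription),
// re-project the entity's world position and publish it to the SwiftUI layer:
// if let projected = arView.project(textEntity.position(relativeTo: nil)) {
//     model.screenPoint = projected
// }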
Has anyone added text to world space using SwiftUI/RealityKit before?

Related

Align 3D object parallel to vertical plane detected by estimatedVerticalPlane

I have this book, but I'm currently remixing the furniture app from the video tutorial that was free on AR/VR week.
I would like to have a 3D wall canvas aligned with the wall/vertical plane detected.
This is proving to be harder than I thought. Positioning isn't an issue: much like the furniture placement app, you can just take columns.3 of the hitTest.worldTransform and give the new geometry that vector3 as its position.
But I do not know what I have to do to get my 3D object rotated to face forward on the aligned detected plane. As I have a canvas object, the photo is on one side of the canvas. On placement, the photo is ALWAYS facing away.
I thought about applying an arbitrary rotation to the canvas to face forward, but that was only correct if I was looking north and placed a canvas on a wall to my right.
I've tried quite a few solutions online; all but one use .existingPlaneUsingExtent for vertical plane detection. That allows you to get the ARPlaneAnchor from
hitTest.anchor as? ARPlaneAnchor.
If you try this when using .estimatedVerticalPlane, the anchor is nil.
I also didn't continue down this route because my horizontal 3D objects started getting placed in the air. This may be down to control-flow logic, but I'm ignoring it until the vertical canvas placement is working.
My current train of thought is to get the front vector of the canvas and rotate it towards the front-facing vector of the detected vertical plane (the grid UIImage) or the hit-test point.
How would I get a forward vector from a 3D point? Or how would I get the front vector from the grid image, the UIImage that is placed as an overlay when ARKit detects a vertical wall?
Here is an example. The canvas is showing its back and is not parallel with the detected vertical plane (the column). But there is a "Place Poster Here" grid, which is what I want the canvas to align with so that I'm able to see the photo.
Things I have tried:
Using .estimatedVerticalPlane
ARKit estimatedVerticalPlane hit test get plane rotation
I don't know how to correctly apply the matrix and Euler angle results from that SO answer.
My addPicture function:
func addPicture(hitTestResult: ARHitTestResult) {
    // I would like to convert the estimated hitTest to an anchor point;
    // it is easier to rotate a node to an anchor point than to calculate eulerAngles.
    // We have all detected anchors in the _Renderer SCNNode, however there are

    // Get the current furniture item, correct its position if necessary,
    // and add it to the scene.
    let picture = pictureSettings.currentPicturePiece()

    // Look for the vertical node geometry in verticalAnchors.
    if let hitPlaneAnchor = hitTestResult.anchor as? ARPlaneAnchor {
        if let anchoredNode = verticalAnchors[hitPlaneAnchor] {
            // code removed, as a .estimatedVerticalPlane hitTestResult doesn't get here
        }
    } else {
        // Transform the hit result to world coordinates.
        let worldTransform = hitTestResult.worldTransform
        let anchoredNodeOrientation = worldTransform.eulerAngles
        picture.rotation.y = -.pi * anchoredNodeOrientation.y

        // Set the transform matrix.
        let positionMatrix = worldTransform.columns.3
        let position = SCNVector3(
            positionMatrix.x,
            positionMatrix.y,
            positionMatrix.z
        )
        picture.position = position + pictureSettings.currentPictureOffset()
    }

    // Parented to the rootNode of the scene.
    sceneView.scene.rootNode.addChildNode(picture)
}
Thanks for any help available.
Edited:
I have noticed the 'handedness' of the 3D model isn't correct / is opposite?
Positive Z is pointing to the left and positive X is facing the camera, for what I would expect to be the front of the model. Is this an issue?
You should try to avoid adding nodes directly into the scene using world coordinates. Rather, you should notify the ARSession of an area of interest by adding an ARAnchor, then use the session callback to vend an SCNNode for the added anchor.
For example your hit test might look something like:
@objc func tapped(_ sender: UITapGestureRecognizer) {
    let location = sender.location(in: sender.view)
    guard let hitTestResult = sceneView.hitTest(location, types: [.existingPlaneUsingGeometry, .estimatedVerticalPlane]).first,
          let planeAnchor = hitTestResult.anchor as? ARPlaneAnchor,
          planeAnchor.alignment == .vertical else { return }
    let anchor = ARAnchor(transform: hitTestResult.worldTransform)
    sceneView.session.add(anchor: anchor)
}
Here a tap gesture recognizer is used to detect taps within an ARSCNView. When a tap is detected, a hit test is performed looking for existing and estimated planes. If the plane is vertical, an ARAnchor is created with the worldTransform of the hit test result and added to the ARSession. This registers that point as an area of interest for the ARSession, so we get better tracking and less drift after our content is added there.
Next, we need to vend our SCNNode for the newly added ARAnchor. For example:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    if anchor is ARPlaneAnchor {
        let anchorNode = SCNNode()
        anchorNode.name = "anchor"
        return anchorNode
    } else {
        let plane = SCNPlane(width: 0.67, height: 1.0)
        plane.firstMaterial?.diffuse.contents = UIImage(named: "monaLisa")
        let planeNode = SCNNode(geometry: plane)
        planeNode.eulerAngles = SCNVector3(CGFloat.pi * -0.5, 0.0, 0.0)
        let node = SCNNode()
        node.addChildNode(planeNode)
        return node
    }
}
Here we first check whether the anchor is an ARPlaneAnchor. If it is, we vend an empty node for debugging purposes. If it is not, then it is an anchor that was added as the result of a hit test, so we create a geometry and node for the most recent tap. Because it is a vertical plane and our content is lying flat, we need to rotate the content about the x axis, so we adjust its eulerAngles to make it upright. If we were to return planeNode directly, the adjustment to its eulerAngles would be discarded, so we add it as a child node of an empty node and return that instead.
This should result in something like the following.

ARKit - Object stuck to camera after tap on screen

I started out with the template project you get when you create an ARKit project. When you run the app you can see the ship and view it from any angle.
However, once I allow camera control and tap on the screen or zoom into the ship by panning, the ship gets stuck to the camera. Now wherever I go with the camera, the ship is stuck to the screen.
I went through the Apple guide, and it seems like they don't really consider this unexpected behavior, as there is nothing about it there.
How to keep the position of the ship fixed after I zoom it or touch the screen?
Well, it looks like allowsCameraControl is not the answer at all. It's good for SceneKit but not for ARKit (maybe it's good for something in AR, but I'm not aware of it yet).
In order to zoom into the view a UIPinchGestureRecognizer is required.
// 1. Find the touch location
// 2. Perform a hit test
// 3. From the results take the first result
// 4. Take the node from that first result and change the scale
@objc private func handlePinch(recognizer: UIPinchGestureRecognizer) {
    if recognizer.state == .changed {
        // 1.
        let location = recognizer.location(in: sceneView)
        // 2.
        let hitTestResults = sceneView.hitTest(location, options: nil)
        // 3.
        if let hitTest = hitTestResults.first {
            let shipNode = hitTest.node
            let newScaleX = Float(recognizer.scale) * shipNode.scale.x
            let newScaleY = Float(recognizer.scale) * shipNode.scale.y
            let newScaleZ = Float(recognizer.scale) * shipNode.scale.z
            // 4.
            shipNode.scale = SCNVector3(newScaleX, newScaleY, newScaleZ)
            recognizer.scale = 1
        }
    }
}
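To wire this up, the pinch recognizer just needs to be attached to the scene view. A minimal sketch, assuming the template project's sceneView outlet:
override func viewDidLoad() {
    super.viewDidLoad()
    // Leave allowsCameraControl off; in AR the camera is driven by device motion.
    let pinch = UIPinchGestureRecognizer(target: self, action: #selector(handlePinch(recognizer:)))
    sceneView.addGestureRecognizer(pinch)
}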
Regarding #2: I got a little confused with another hitTest method, hitTest(_:types:).
Note from the documentation:
This method searches for AR anchors and real-world objects detected by
the AR session, not SceneKit content displayed in the view. To search
for SceneKit objects, use the view's hitTest(_:options:) method
instead.
So that method cannot be used if you want to scale a node, which is SceneKit content.
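To make the distinction concrete, here is a small sketch (assuming sceneView is the template's ARSCNView and location is a touch point) showing the two hit-test APIs side by side:
func compareHitTests(at location: CGPoint) {
    // hitTest(_:options:) searches SceneKit content rendered in the view,
    // so it's the one to use for finding and scaling the ship node.
    let sceneKitHits: [SCNHitTestResult] = sceneView.hitTest(location, options: nil)

    // hitTest(_:types:) searches ARKit's real-world results (planes, feature points),
    // not SCNNodes, so it can't be used to grab SceneKit content.
    let arKitHits: [ARHitTestResult] = sceneView.hitTest(location, types: [.existingPlaneUsingExtent, .featurePoint])

    print("SceneKit nodes hit: \(sceneKitHits.count), AR results: \(arKitHits.count)")
}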

How can I set text orientation in ARKit?

I am creating a simple app with ARKit in which I add some text to the scene at the tapped position:
@objc func tapped(sender: UITapGestureRecognizer) {
    let sceneView = sender.view as! ARSCNView
    let tapLocation = sender.location(in: sceneView)
    let hitTest = sceneView.hitTest(tapLocation, types: .featurePoint)
    if !hitTest.isEmpty {
        self.addTag(tag: "A", hitTestResult: hitTest.first!)
    } else {
        print("no match")
    }
}
func addTag(tag: String, hitTestResult: ARHitTestResult) {
    let text = SCNText(string: tag, extrusionDepth: 0.1)
    text.font = UIFont(name: "Optima", size: 1)
    text.firstMaterial?.diffuse.contents = UIColor.red
    let tagNode = SCNNode(geometry: text)

    let transform = hitTestResult.worldTransform
    let thirdColumn = transform.columns.3
    tagNode.position = SCNVector3(thirdColumn.x, thirdColumn.y - tagNode.boundingBox.max.y / 2, thirdColumn.z)
    print("\(thirdColumn.x) \(thirdColumn.y) \(thirdColumn.z)")
    self.sceneView.scene.rootNode.addChildNode(tagNode)
}
It works, but I have a problem with the orientation of the text. When I add it from the camera's original position, the text orientation is fine: I can see the text from the front (Sample 1). But when I turn the camera to the left or right and add the text by tapping, I see the added text from the side (Sample 2).
Sample 1:
Sample 2:
I know there should be some simple trick to solve it, but as a beginner in this topic I could not find it so far.
You want the text to always face the camera? SCNBillboardConstraint is your friend:
tagNode.constraints = [SCNBillboardConstraint()]
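If you want the text to stay upright while still turning to face the camera, you can constrain the billboard to the Y axis (a small sketch using SCNBillboardConstraint's freeAxes property):
let billboard = SCNBillboardConstraint()
billboard.freeAxes = .Y   // rotate only around the vertical axis so the text stays upright
tagNode.constraints = [billboard]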
Am I correct in saying that you want the text to face the camera when you tap (wherever you happen to be facing), but then remain stationary?
There are a number of ways of adjusting the orientation of any node. For this case I would suggest simply setting the eulerAngles of the text node to be equal to those of the camera at the point at which you instantiate the text.
In your addTag() function you add:
guard let eulerAngles = self.sceneView.session.currentFrame?.camera.eulerAngles else { return }
tagNode.eulerAngles = SCNVector3(eulerAngles.x, eulerAngles.y, eulerAngles.z + .pi / 2)
The additional .pi / 2 is there to ensure the text is in the correct orientation, as the default with ARKit is for a landscape orientation and therefore the text comes out funny. This applies a rotation around the local z axis.
It's also plausible (and some may argue it's better) to use .localRotate() on the node, or to access its transform property; however, I like the approach of manipulating the position and eulerAngles directly.
Hope this helps.
EDIT: replaced Float(1.57) with .pi / 2.

Adding custom view to ARKit

I just started looking at Apple's ARKitExample project and I am still studying it. I need to build something like an interactive guide. For example, when we detect something (like a QR code), can I show a label in that area?
Is it possible to add a custom view (like UIView or UILabel) to a surface?
Edit
I saw some example code for adding a line. I will need to find out how to add an additional view or image.
let mat = SCNMatrix4FromMat4(currentFrame.camera.transform)
let dir = SCNVector3(-1 * mat.m31, -1 * mat.m32, -1 * mat.m33)
let currentPosition = pointOfView.position + (dir * 0.1)

if button!.isHighlighted {
    if let previousPoint = previousPoint {
        let line = lineFrom(vector: previousPoint, toVector: currentPosition)
        let lineNode = SCNNode(geometry: line)
        lineNode.geometry?.firstMaterial?.diffuse.contents = lineColor
        sceneView.scene.rootNode.addChildNode(lineNode)
    }
}
I think this code should be able to add a custom image, but I need to find the whole sample.
func updateRenderer(_ frame: ARFrame) {
    drawCameraImage(withPixelBuffer: frame.capturedImage)
    let viewMatrix = simd_inverse(frame.camera.transform)
    let projectionMatrix = frame.camera.projectionMatrix
    updateCamera(viewMatrix, projectionMatrix)
    updateLighting(frame.lightEstimate?.ambientIntensity)
    drawGeometry(forAnchors: frame.anchors)
}
ARKit isn't a rendering engine — it doesn't display any content for you. ARKit provides information about real-world spaces for use by rendering engines such as SceneKit, Unity, and any custom engine you build (with Metal, etc), so that they can display content that appears to inhabit real-world space. Thus, any "how do I show" question for ARKit is actually a question for whichever rendering engine you use with ARKit.
SceneKit is the easy out-of-the-box, no-additional-software-required way to display 3D content with ARKit, so I presume you're asking about that.
SceneKit can't render a UIView as part of a 3D scene. But it can render planes, cubes, or other shapes, and texture-map 2D content onto them. If you want to draw a text label on a plane detected by ARKit, that's the direction to investigate — follow the example's, um, example to create SCNPlane objects corresponding to detected ARPlaneAnchors, get yourself an image of some text, and set that image as the plane geometry's diffuse contents.
Yes, you can add a custom view in an ARKit scene.
Just make an image of your view and add it wherever you want.
You can use the following code to get an image of a UIView:
func image(with view: UIView) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.isOpaque, 0.0)
    defer { UIGraphicsEndImageContext() }
    if let context = UIGraphicsGetCurrentContext() {
        view.layer.render(in: context)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        return image
    }
    return nil
}
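As a rough usage sketch (assuming a UILabel you want to show, and a node, here called anchorNode, that you have attached to a detected plane), the resulting image can be texture-mapped onto an SCNPlane as described in the first answer:
let label = UILabel(frame: CGRect(x: 0, y: 0, width: 200, height: 60))
label.text = "Hello ARKit"
label.textAlignment = .center
label.backgroundColor = .white

let plane = SCNPlane(width: 0.2, height: 0.06)               // metres in scene space
plane.firstMaterial?.diffuse.contents = image(with: label)   // the helper above
plane.firstMaterial?.isDoubleSided = true

let labelNode = SCNNode(geometry: plane)
anchorNode.addChildNode(labelNode)                            // attach to your anchor's node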

UIInterpolatingMotionEffect Parallax Effect Swift 2 iOS

I am trying to create a parallax effect, like the iPhone home screen where the background moves as you tilt your phone. I have so far achieved this, but I still have one problem: after I tilt my phone and the background moves, it very slowly moves back into the centered position.
It's not the constraints. I removed the center x/y constraints and it still slid back slowly, as if it were re-calibrating to the new position. The only other constraint is the ratio, so it's not the constraints.
Any ideas?
The code is simple:
let leftRightMin = CGFloat(-50.0)
let leftRightMax = CGFloat(50.0)
let upDownMin = CGFloat(-35.0)
let upDownMax = CGFloat(35.0)

let leftRight = UIInterpolatingMotionEffect(keyPath: "center.x", type: UIInterpolatingMotionEffectType.TiltAlongHorizontalAxis)
leftRight.minimumRelativeValue = leftRightMin
leftRight.maximumRelativeValue = leftRightMax

let upDown = UIInterpolatingMotionEffect(keyPath: "center.y", type: UIInterpolatingMotionEffectType.TiltAlongVerticalAxis)
upDown.minimumRelativeValue = upDownMin
upDown.maximumRelativeValue = upDownMax

let fxGroup = UIMotionEffectGroup()
fxGroup.motionEffects = [leftRight, upDown]
backgroundImage.addMotionEffect(fxGroup)
Any ideas why it slowly centers the image back after tilting and how to fix it?
This is the behavior of UIInterpolatingMotionEffect. You'll notice it also happens on the home screen, and everywhere else in the system where the effect is used.
It does this because sometimes the user will move their device in such a way that the content interpolates to the maximum position but does not return to the center or resting position. The position that was once the maximum interpolated position then becomes the resting center, so the content must move back to its original position so that, if the device moves again, the effect can continue.
I did not find any mention of this behavior in the UIInterpolatingMotionEffect documentation, but it can be observed everywhere the system uses the effect.
