Orient text to always face the camera with ARKit & RealityKit - iOS

OK, so I'm currently developing an AR app using ARKit + RealityKit, and I'm struggling with a basic feature: I want to display some text near my 3D object, but that text needs to always face the camera to stay readable. I couldn't find any way to display pure 2D text, so I decided to display a 3D text mesh and orient it toward the camera using a scene subscription, but I can't get it to work.
Here's the subscription responsible for the orientation change:
var labelSubscription: Cancellable!
// ...
labelSubscription = arView.scene.subscribe(to: SceneEvents.Update.self) { (_) in
    labelEntity.look(at: arView.cameraTransform.translation,
                     from: labelEntity.position(relativeTo: nil),
                     upVector: DOWN,
                     relativeTo: nil)
    print("update triggered")
}
But this doesn't do anything and the print statement is never reached.
Also, even calling labelEntity.look once right after instantiating the entity (without any subscription, just to set the initial orientation) doesn't seem to do anything.
How can I make this work? And is there a more convenient way to display 2D text in my AR view? Thanks so much :)
PS: I'm new to Swift in general, so I'm not sure what types to put in the closure or what design patterns to use here. Any good learning material for ARKit would be appreciated.
EDIT: Here's a gist with the full code if needed https://gist.github.com/nohehf/b8ef8d83cc0f0f68abafba454668a779
EDIT 2: Setting a timer to call the look function periodically showed me that the call itself is wrong too, so I have two issues: my look call doesn't do what I expect (the orientation of the text is wrong, see image below), and the scene subscription is never triggered.
Here the text should be facing the camera
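For reference, here is an untested sketch of how this kind of per-frame subscription is often written, using the common "copy the camera's rotation" billboard trick instead of look(at:); arView and labelEntity are assumed to already exist, and the Cancellable still has to be kept alive by a stored property:

import RealityKit
import Combine

// Untested sketch, not a verified fix: instead of look(at:), one common
// billboard approach is to match the camera's rotation on every frame.
var labelSubscription: Cancellable?

labelSubscription = arView.scene.subscribe(to: SceneEvents.Update.self) { _ in
    // Keep the label parallel to the camera's image plane so the text stays readable.
    labelEntity.setOrientation(arView.cameraTransform.rotation, relativeTo: nil)
}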

Related

Detecting a real world object using ARKit with iOS

I am currently playing a bit with ARKit. My goal is to detect a shelf and draw stuff onto it.
I already found ARReferenceImage, and that basically works for a very, very simple prototype, but the image needs to be quite complex, it seems? Xcode always complains if I try to use something much simpler (like a QR-code-like image). With that marker I would know the position of an edge, and then I'd know the physical size of my shelf and how to place stuff onto it. So that would be OK, but I think small and simple markers will not work, right?
But ideally I would not need a marker at all.
I know that I can detect e.g. planes, but I want to detect the shelf itself. But as my shelf is open, it's not really a plane. Are there other possibilities to find an object using ARKit?
I know that my question is very vague, but maybe somebody could point me in the right direction. Or tell me if that's even possible with ARKit or if I need other tools? Like Unity?
There are several different possibilities for positioning content in augmented reality. They are called content anchors, and they are all subclasses of the ARAnchor class.
Image anchor
Using an image anchor, you would stick your reference image on a pre-determined spot on the shelf and position your 3D content relative to it.
"the image needs to be quite complex, it seems? Xcode always complains if I try to use something much simpler (like a QR-code-like image)"
That's correct. The image needs to have enough visual detail for ARKit to track it. Something like a simple black and white checkerboard pattern doesn't work very well. A complex image does.
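As an illustrative sketch (not part of the original answer), enabling image detection looks roughly like this; the group name "ShelfMarkers" is a placeholder and sceneView is assumed to be an existing ARSCNView:

import ARKit

// Sketch: load the reference images bundled in the asset catalog and run
// a world-tracking session that detects them.
let configuration = ARWorldTrackingConfiguration()
if let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "ShelfMarkers", bundle: nil) {
    configuration.detectionImages = referenceImages
}
sceneView.session.run(configuration)
// A detected image is delivered as an ARImageAnchor via session(_:didAdd:);
// position your shelf content relative to that anchor's transform.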
Object anchor
Using object anchors, you scan the shape of a 3D object ahead of time and bundle this data file with your app. When a user uses the app, ARKit will try to recognise this object and if it does, you can position your 3D content relative to it. Apple has some sample code for this if you want to try it out quickly.
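Again as a rough, hypothetical sketch ("ShelfScan" is a placeholder resource group containing the scanned .arobject file, and sceneView is assumed to exist):

import ARKit

// Sketch: detect a previously scanned 3D object.
let configuration = ARWorldTrackingConfiguration()
if let referenceObjects = ARReferenceObject.referenceObjects(inGroupNamed: "ShelfScan", bundle: nil) {
    configuration.detectionObjects = referenceObjects
}
sceneView.session.run(configuration)
// A recognised object is delivered as an ARObjectAnchor via session(_:didAdd:).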
Manually creating an anchor
Another option would be to enable ARKit plane detection, and have the user tap a point on the horizontal shelf. Then you perform a raycast to get the 3D coordinate of this point.
You can create an ARAnchor object using this coordinate, and add it to the ARSession.
Then you can again position your content relative to the anchor.
You could also implement a drag gesture to let the user fine-tune the position along the shelf's plane.
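A rough sketch of that tap → raycast → anchor flow (handleTap and "shelfPoint" are placeholder names; sceneView is assumed to be an ARSCNView running with plane detection enabled):

import ARKit
import UIKit

// Sketch of the tap → raycast → anchor flow described above.
@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    let tapLocation = gesture.location(in: sceneView)

    // Ask ARKit for a point on a detected horizontal plane under the tap.
    guard let query = sceneView.raycastQuery(from: tapLocation,
                                             allowing: .existingPlaneGeometry,
                                             alignment: .horizontal),
          let result = sceneView.session.raycast(query).first else { return }

    // Create an anchor at the hit position and add it to the session; content
    // added for this anchor in the delegate callbacks stays attached to it.
    let shelfAnchor = ARAnchor(name: "shelfPoint", transform: result.worldTransform)
    sceneView.session.add(anchor: shelfAnchor)
}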
Conclusion
Which one of these placement options is best for you depends on the use case of your app. I hope this answer was useful :)
References
There are a lot of informative WWDC videos about ARKit. You could start off by watching this one: https://developer.apple.com/videos/play/wwdc2018/610
It is absolutely possible. Whether you do this in Swift or Unity depends entirely on what you are comfortable working in.
ARKit calls them object anchors (https://developer.apple.com/documentation/arkit/arobjectanchor). In other implementations they are often called mesh or model targets.
This YouTube video shows what you want to do in Swift.
But objects like a shelf might be hard to recognize since their content often changes.

Placing objects below the ground in AR Quick Look on iOS

I am working on a project that will display objects below the ground using AR Quick Look. However, the AR mode seems to bring everything above the ground based on the bounding box of the objects in the scene.
I have tried using the USDZ directly and composing a simple scene in Reality Composer with the object or with a simple cube, with the exact same result. The AR preview mode in Reality Composer shows the object below the ground or below an image anchor correctly. However, if I export the scene as a .reality file and open it using AR Quick Look, it brings the object above the ground as well.
Is there a way to achieve showing an object below the detected horizontal plane or image (horizontal) using AR Quick Look?
This is still an issue a year later. I have submitted feedback to Apple. I suggest you do too. I have suggested adding a checkbox to keep Y axis persistent. My assumption is this behaves this way to prevent the object from colliding with the ground, but I don't think it's necessary. It's just a limitation right now.

Different methods of displaying camera under SceneKit

I'm developing an AR application which can use a few different engines. One of them is based on SceneKit (not ARKit).
I used to make the SceneView background transparent and just display an AVCaptureVideoPreviewLayer under it. But this created a problem later: it turns out that if you use a clear backgroundColor for the SceneView and then add a floor node with diffuse.contents = UIColor.clear (a transparent floor), then shadows are not displayed on it. And the goal for now is to have shadows in this engine.
I think the best method of getting shadows to work is to set the camera preview as SCNScene.background.contents. For this I tried using AVCaptureDevice.default(for: .video). This worked, but it has one issue: you can't use the video format you want, because SceneKit automatically changes the format when the device is assigned. I even asked Apple for help using one of the two technical support requests you can send them, but they replied that for now there is no public API that would allow me to use this with the format I would like. On an iPhone 6s the format changes to 30 FPS, and I need it to be 60 FPS. So this option is no good.
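For context, the assignment described above is roughly this (a minimal sketch; sceneView is assumed to be the SCNView in question):

import SceneKit
import AVFoundation

// Minimal sketch: hand the capture device to SceneKit and let it drive the
// camera feed as the scene background. As noted above, SceneKit then picks
// the capture format itself.
if let captureDevice = AVCaptureDevice.default(for: .video) {
    sceneView.scene?.background.contents = captureDevice
}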
Is there some other way I could assign the camera preview to the scene's background property? From what I read, I can also use a CALayer for this property, so I tried assigning an AVCaptureVideoPreviewLayer, but this resulted in a black background only, with no video. I updated the layer's frame to the correct size, but it still didn't work. Maybe I did something wrong, and there is a way to use AVCaptureVideoPreviewLayer or something else?
Can you suggest some possible solutions? I know I could use ARKit, and I do for another engine, but for this particular one I need to keep using SceneKit.

Split Screen 2 Player Local Multiplayer with SpriteKit

I want to make a 2-player mode, split-screen style, like Tiny Wings HD did, where each side of an iPad gets a flipped-orientation view of the current level.
I wanted to also implement it on tvOS (without the flipped orientation), as I feel the TV begs for this sort of gameplay; it's pretty classic to have this style of game on a TV (e.g. Mario Kart 64 or GoldenEye).
Over on the Apple Developer forum, someone suggested that it could be done as follows, but there were no other responses.
"You can have two views attached to the main window (add a subview in your viewcontroller). To both views you can present a copy of the scene. Then you can exchange game data between scenes via singletons."
I was looking for a more in-depth explanation as I don't exactly understand what the answer is saying.
I'd just like to be able to have two cameras both rendering the same scene, but one focusing on player 1 and the other on player 2.
Obviously this isn't a simple answer, so I don't expect a full in-depth tutorial.
Unfortunately I could find no info on this.
Has anyone tried this?
A sample project would be ideal or some documentation/links that might help.
I'm sure a demonstration of this would be valuable to quite a lot of people.
No Cameras involved or necessary
The players just look like they're moving along the x axis because the backgrounds are scrolling by. You can allow the players to move up & down on the y axis whether jumping, ducking, rolling or following a path like in Tiny Wings, but the player never leaves their x position. You can even have each half of the screen background scrolling at different speeds to represent that one player is moving faster than the other.
In your scene's update method you can scroll your backgrounds, and in your touches methods you can make the players jump, duck, etc.
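A minimal sketch of that idea (node names, sizes, and speeds are made up for illustration):

import SpriteKit

// Sketch: each half of the screen scrolls its own background at its own speed,
// while the player sprites keep a fixed x position.
class SplitScreenScene: SKScene {

    let topBackground = SKSpriteNode(color: .blue, size: CGSize(width: 1024, height: 384))
    let bottomBackground = SKSpriteNode(color: .green, size: CGSize(width: 1024, height: 384))

    override func didMove(to view: SKView) {
        topBackground.position = CGPoint(x: frame.midX, y: frame.height * 0.75)
        bottomBackground.position = CGPoint(x: frame.midX, y: frame.height * 0.25)
        addChild(topBackground)
        addChild(bottomBackground)
    }

    override func update(_ currentTime: TimeInterval) {
        topBackground.position.x -= 4      // player 1 appears to move faster
        bottomBackground.position.x -= 2   // player 2 appears to move slower
    }
}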
Instead of using an SKView to present an SKScene, you can use SKRenderer and MTKView. SKRenderer renders a scene into a Metal pipeline, which in turn can be presented by an MTKView.
Crucially, you can decide if SKRenderer updates the scene, allowing you to render the same scene frame multiple times (possibly using different cameras).
So a pipeline might look like this: SKScene → SKRenderer → Metal command buffer → MTKView.
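An untested sketch of that setup, where an MTKView delegate drives SKRenderer each frame (class and property names are illustrative):

import SpriteKit
import MetalKit
import QuartzCore

// Untested sketch: an MTKView delegate that updates and renders an SKScene
// through SKRenderer. A real split-screen setup would issue the render call
// twice with different viewports/cameras inside draw(in:).
class SceneRendererDelegate: NSObject, MTKViewDelegate {

    let renderer: SKRenderer
    let commandQueue: MTLCommandQueue

    init(device: MTLDevice, scene: SKScene) {
        renderer = SKRenderer(device: device)
        renderer.scene = scene
        commandQueue = device.makeCommandQueue()!
    }

    func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {}

    func draw(in view: MTKView) {
        guard let descriptor = view.currentRenderPassDescriptor,
              let drawable = view.currentDrawable,
              let commandBuffer = commandQueue.makeCommandBuffer() else { return }

        // Advance the simulation once...
        renderer.update(atTime: CACurrentMediaTime())

        // ...then render it (this call could be repeated with different viewports).
        let viewport = CGRect(origin: .zero, size: view.drawableSize)
        renderer.render(withViewport: viewport,
                        commandBuffer: commandBuffer,
                        renderPassDescriptor: descriptor)

        commandBuffer.present(drawable)
        commandBuffer.commit()
    }
}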
Apple actually talk about this option in Choosing a SpriteKit Scene Renderer. There's also a section about using SKRenderer in Going Beyond 2D with SpriteKit from WWDC17 which is quite helpful. This answer also shows how to use SKRenderer (albeit in Objective-C).

How can I add and transform images using React Native for Apple Watch?

With the HTML5 canvas, transform() (http://www.w3schools.com/tags/canvas_transform.asp) is the general-purpose multitool, and convenience methods let you do helpful subsets of a generic transform, like rotate(), scale(), and translate().
XY question variant of what I ask: How can I treat an Apple Watch's display like an HTML5 canvas?
Better way of asking: I want to display images like clock hands and a back face (yes, I know this might not get into the App Store), which involves repeated transforms, or positioning and then rotating around a point. What should I be reading / doing?
Thanks,
