Do I need anchors to create an ARKit / RealityKit app?

The ARKit and RealityKit tutorials I have found all deal with anchors. However, there are VR apps that do not place any objects on surfaces. Instead, they just take the location and orientation of the device to display objects that are far away from the phone:
Star Chart shows stars, planets and the sun at their apparent location.
Peakfinder shows the mountains which are currently visible.
Both these apps do not need any real-world anchors. They just take the camera's location and orientation, and then render a model.
Can I create a similar app with ARKit or RealityKit, or is this a use case beyond these two frameworks?

It depends on whether you need an AR or a VR app. Generally speaking, you definitely need anchors for an AR app, while for a VR app you may or may not need them (RealityKit is built around anchoring, but SceneKit has no anchoring support at all).
If you need comprehensive info about ARKit and RealityKit anchors, read this post.
Using the RealityKit framework you can easily create both VR and AR apps (games, visualisations, scientific apps, and so on). If you place 3D models in a VR scene, you tether those models (like the aforementioned distant stars or mountains) to AnchorEntity(.world) anchors. If you place 3D models in an AR scene, you tether a model to any of the following anchor types: .world, .image, .face, .plane, .body, etc.
Using the pure SceneKit framework you can create only VR apps, because SceneKit has no anchors under its hood. But if you use SceneKit together with ARKit, you can create AR apps with all the anchor types ARKit provides. This post covers the differences between RealityKit and SceneKit. In addition to the above, note that ARKit itself renders neither VR nor AR scenes; its sole purpose is world tracking and scene understanding.
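To illustrate the world-anchor case above, here is a minimal RealityKit sketch: a far-away model tethered to a world anchor at the session origin, so only the device's position and orientation determine where it appears on screen. The model name and the 500 m offset are placeholders, not from the question:

```swift
import RealityKit

// Minimal sketch: tether a distant model to a world anchor (i.e. a .world anchor).
// "starfield" and the offsets are illustrative values.
func addDistantModel(to arView: ARView) {
    let worldAnchor = AnchorEntity(world: [0, 0, 0])   // anchored at the session origin

    if let stars = try? Entity.loadModel(named: "starfield") {
        // Push the model far from the camera so it reads as effectively "at infinity".
        stars.position = SIMD3<Float>(0, 100, -500)
        worldAnchor.addChild(stars)
    }
    arView.scene.addAnchor(worldAnchor)
}
```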

Related

Difference between ARKit anchors and Azure Spatial Anchors

I am working on creating an AR application on iOS which can define some anchor points with annotations and save them in the cloud. Later I want to retrieve these anchor points on any device (iOS or Android) and show them in an ARView. I know we can define anchors using just ARKit; alternatively, we can use Azure Spatial Anchors.
My feeling is that, since I am using the anchors across platforms, I should use Azure Spatial Anchors. But I want to know the exact differences between these two types of anchors, and whether it is possible to use plain ARKit anchors and have them accurately rendered on Android devices as well. Simply put, I want to know what the best solution is for my scenario.
Azure Spatial Anchors (ASA) actually uses ARKit under the hood on iOS. ARKit makes it easy to implement an AR app for your iOS device, and the same goes for ARCore on Android devices.
ASA allows you to bridge the two AR worlds. You can create an Anchor on one device (iOS, Android or HoloLens) and retrieve the anchor on another.
The GIF on the Share spatial anchors across sessions and devices page shows what's possible, and is what you appear to be looking for.
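For context, a plain ARKit anchor (the kind ASA wraps and persists for you) is created as in the minimal sketch below; it lives only in the current session on the current device. The anchor name is a hypothetical example:

```swift
import ARKit

// Minimal sketch: a local ARKit anchor. It exists only in this session on this
// device; sharing it across devices and sessions is what ASA (or ARKit's own
// ARWorldMap persistence) adds on top.
func placeLocalAnchor(in session: ARSession, at transform: simd_float4x4) {
    let anchor = ARAnchor(name: "annotation-1", transform: transform)  // name is illustrative
    session.add(anchor: anchor)
}
```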

How to turn off/on AR in an augmented reality app using ARKit?

I'm starting to learn how to use ARKit, and I would like to add a button like the one in the Pokémon GO application, where you can switch between AR ON (with the model in the real world) and AR OFF (without using the camera, just the 3D model on a fixed background). Is there an easy way to do it?
Another good example of what you're asking about is the AR Quick Look feature in iOS 12 (see WWDC video or this article): when you quick look a USDZ file you get a generic white-background preview where you can spin the object around with touch gestures, and you can seamlessly switch back and forth between that and a real-world AR camera view.
You've asked about ARKit but not said anything about which renderer you're using. Remember, ARKit itself only tells you about the real world and provides live camera imagery, but it's up to you to display that image and whatever 3D overlay content you want — either by using a 3D graphics framework like SceneKit, Unity, or Unreal, or by creating your own renderer with Metal. So the rest of this answer is renderer-agnostic.
There are two main differences between an AR view and a non-AR 3D view of the same content:
An AR view displays the live camera feed in the background; a non-AR view doesn't.
3D graphics frameworks typically involve some notion of a virtual camera that determines your view of the 3D scene — by moving the camera, you change what part of the scene you see and what angle you see it from. In AR, the virtual camera is made to match the movement of the real device.
Hence, to switch between AR and non-AR 3D views of the same content, you just need to manipulate those differences in whatever way your renderer allows:
Hide the live camera feed. If your renderer lets you directly turn it off, do that. Otherwise you can put some foreground content in front of it, like an opaque skybox and/or a plane for your 3D models to rest on.
Directly control the camera yourself and/or provide touch/gesture controls for the user to manipulate the camera. If your renderer supports multiple cameras in the scene and choosing which one is currently used for rendering, you can keep and switch between the ARKit-managed camera and your own.
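As an illustration of those two switches, here is a minimal sketch assuming a SceneKit renderer: one shared SCNScene shown either in an ARSCNView (AR ON, live camera feed plus ARKit-driven camera) or in a plain SCNView (AR OFF, fixed background plus user-controlled camera). The class and property names are illustrative, not from the question:

```swift
import ARKit
import SceneKit

// Minimal sketch: toggle between an AR view and a non-AR view of the same scene.
final class ARToggleController {
    let arView: ARSCNView      // live camera feed, ARKit-managed camera
    let plainView: SCNView     // fixed background, user-driven camera

    init(arView: ARSCNView, plainView: SCNView) {
        self.arView = arView
        self.plainView = plainView
        plainView.scene = arView.scene          // share the 3D content
        plainView.allowsCameraControl = true    // touch gestures spin the model
        plainView.backgroundColor = .white      // the "fixed background"
    }

    func setAREnabled(_ enabled: Bool) {
        arView.isHidden = !enabled
        plainView.isHidden = enabled
        if enabled {
            arView.session.run(ARWorldTrackingConfiguration())
        } else {
            arView.session.pause()              // stop tracking and the camera feed
        }
    }
}
```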

How to Visualize zPositions in iOS

My team and I are working on a SpriteKit based iOS game of medium complexity. There are lots of layers and nodes to the design of the game and the zPositioning of the nodes has gotten sloppy. One task I have agreed to take on is the revamping of our zPosition strategy: moving to constants instead of magic numbers, having a holistic zPosition scheme for the app, etc. but first I want to analyze where we are at now. So here is my question:
I vaguely recall watching a WWDC video (or some other tutorial, maybe) in which the presenter used some aspect of Instruments (or some other tool) to show a 3D rendering of an app, seen from an isometric angle, based on the zPosition of the SKNodes (or UIKit elements?) in the app.
Does anyone here know what tool this is? And if not, what is the best way to visualize the current state of zPositions in a SpriteKit based app? Thanks!
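Independent of the visualization tool, one common way to do the "constants instead of magic numbers" part mentioned above is a single app-wide layer enum. A minimal sketch with hypothetical layer names:

```swift
import SpriteKit

// Minimal sketch: one central layering scheme instead of scattered magic numbers.
enum ZLayer: CGFloat {
    case background = 0
    case terrain    = 100
    case characters = 200
    case effects    = 300
    case hud        = 1000
}

extension SKNode {
    /// Places the node on a named layer, with an optional fine-grained offset.
    func place(on layer: ZLayer, offset: CGFloat = 0) {
        zPosition = layer.rawValue + offset
    }
}

// Usage:
// let player = SKSpriteNode(imageNamed: "player")
// player.place(on: .characters)
```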

Reproduce Google Earth-like flyover with iOS 7 MapKit's custom tiles

I would love to reproduce a Google Earth-like 3D map flyover, even when offline.
As of iOS 7, MapKit allows us to draw custom offline tiles. It also allows us to set a camera in order to see the map in 3D (or 2.5D, as you may wish to call it).
I was wondering: can I draw a 3D shape, like Apple does for its Flyover feature, on my custom tiles?
I need to apply a "bump map" to the map in order to get a Google Earth-like 3D view, and I was wondering if Apple allows me to do just that with iOS 7's custom tile rendering plus camera settings.
Thanks
I have experimented pretty extensively with this, and there is no supported way to do it. Right now, Apple only offers raster tile-based overlays, albeit with an automatic 2.5D/3D transformation when they are overlaid on the map. Hopefully in the future they will support a 3D API and/or custom (say, OpenGL-based) augmentation of the map.
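For reference, here is a minimal sketch of the two iOS 7 pieces the question and answer refer to: a custom raster tile overlay plus a tilted MKMapCamera. The tile URL template and coordinates are placeholders; the map view's delegate also has to return an MKTileOverlayRenderer for the overlay:

```swift
import MapKit

// Minimal sketch: custom raster tiles plus a 2.5D camera. MapKit tilts the flat
// tiles under the camera, but there is no supported way to extrude them into
// true 3D terrain.
func configureCustomTileMap(_ mapView: MKMapView) {
    let overlay = MKTileOverlay(urlTemplate: "https://example.com/tiles/{z}/{x}/{y}.png")
    overlay.canReplaceMapContent = true                  // hide Apple's base map
    mapView.addOverlay(overlay, level: .aboveLabels)

    let center = CLLocationCoordinate2D(latitude: 46.0, longitude: 7.0)
    let eye    = CLLocationCoordinate2D(latitude: 45.9, longitude: 7.0)
    mapView.camera = MKMapCamera(lookingAtCenter: center,
                                 fromEyeCoordinate: eye,
                                 eyeAltitude: 3000)
}

// In the MKMapViewDelegate:
// func mapView(_ mapView: MKMapView, rendererFor overlay: MKOverlay) -> MKOverlayRenderer {
//     return MKTileOverlayRenderer(tileOverlay: overlay as! MKTileOverlay)
// }
```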

Good 3D / OpenGL engine to work with .dae (Collada) objects in iOS

I've been using the NinevehGL 3D engine in my current project, but it no longer supports iOS 6 or iOS 7 (I can't take a screenshot, and on an iOS 7 device the model simply disappears).
The project, specifically:
"It is a 3D sofa builder. I have .dae files for each part of the sofa. The 3D engine just selects a particular part of the sofa (a .dae object) and attaches it to the sofa. The user can change the colors or shades of the sofa, and can also rotate it in 3D space."
So I have done a bit of research and found the Unity 3D engine. I learned that it is compatible with .dae objects, but it is expensive, and it is aimed at creating complex games rather than the simple 3D interface my project requires (described above).
I also checked Cocos3D, but I'm not sure whether it supports .dae files.
If somebody has experience with these or other engines and can suggest one, it would be much appreciated.
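Purely as an illustration of what the .dae workflow described above can look like, here is a minimal sketch using SceneKit (mentioned elsewhere on this page), which loads Collada scenes natively on iOS. The file name, node handling, and color are placeholders, not a recommendation from the original thread:

```swift
import SceneKit
import UIKit

// Minimal sketch: load one sofa part from a .dae file, attach it to the
// assembled scene, and recolor it. Names are illustrative.
func addSofaPart(named fileName: String, to scene: SCNScene) {
    guard let partScene = SCNScene(named: "\(fileName).dae") else { return }
    for child in partScene.rootNode.childNodes {
        // Recolor the part's material (e.g. a fabric shade picked by the user).
        child.geometry?.firstMaterial?.diffuse.contents = UIColor.brown
        scene.rootNode.addChildNode(child)
    }
}

// Letting the user rotate the assembled sofa with touch gestures:
// scnView.allowsCameraControl = true
```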
