I am working on an AR application on iOS that can define anchor points with annotations and save them in the cloud. Later I want to retrieve these anchor points on any device (iOS or Android) and show them in an ARView. I know we can define anchors using just ARKit. We can also use Azure Spatial Anchors.
My feeling is that since I am using the anchors across platforms I should use Azure Spatial Anchors, but I want to know the exact differences between these two types of anchors. And is it possible to use just ARKit anchors and have them rendered accurately on Android devices as well? In short, I want to know the best solution for my scenario.
Azure Spatial Anchors (ASA) actually uses ARKit on iOS. ARKit makes it easy for you to implement an AR app on your iOS device; the same goes for ARCore on Android devices.
ASA allows you to bridge the two AR worlds: you can create an anchor on one device (iOS, Android or HoloLens) and retrieve the anchor on another.
The GIF on the Share spatial anchors across sessions and devices page shows what's possible, and is what you appear to be looking for.
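To make this concrete, here is a rough Swift sketch of the iOS side, loosely following the Azure Spatial Anchors iOS quickstart. The class and property names come from the ASA iOS SDK, but treat the exact signatures, nullability, and the credential placeholders as assumptions and verify them against the current SDK documentation.

```swift
import ARKit
import AzureSpatialAnchors

// Sketch only: wraps a local ARKit anchor in an Azure Spatial Anchor and
// uploads it. The returned identifier is what an ARCore (Android) or
// HoloLens client would later use to locate the same anchor.
class CloudAnchorUploader {
    let cloudSession: ASACloudSpatialAnchorSession = ASACloudSpatialAnchorSession()

    func configure(with arSession: ARSession) {
        cloudSession.session = arSession                      // reuse the running ARKit session
        cloudSession.configuration.accountId  = "<account-id>"   // placeholder credentials
        cloudSession.configuration.accountKey = "<account-key>"
        cloudSession.start()
        // You also need to forward camera frames to the cloud session
        // (processFrame(_:)) from your ARSessionDelegate so ASA can gather
        // enough environment data before the upload succeeds.
    }

    func save(_ localAnchor: ARAnchor, completion: @escaping (String?) -> Void) {
        let cloudAnchor: ASACloudSpatialAnchor = ASACloudSpatialAnchor()
        cloudAnchor.localAnchor = localAnchor
        // ASA also lets you attach key/value metadata (appProperties) to an
        // anchor, which is one way to store the annotations you mentioned.
        cloudSession.createAnchor(cloudAnchor) { error in
            completion(error == nil ? cloudAnchor.identifier : nil)
        }
    }
}
```

On Android the flow is symmetrical: the ARCore-based ASA SDK queries for that identifier and hands you back a local ARCore anchor to attach your content to.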
Are there any possible solutions to make usdz files be tracked on walls in AR Quick Look?
It looks like model-viewer doesn't support iOS vertical tracking.
Yes, you can use the model-viewer library for wall-placement augmented reality.
https://modelviewer.dev/examples/augmentedreality/#wall
You can use a .glb for Android and a .usdz for iOS (https://modelviewer.dev/docs/#entrydocs-augmentedreality-attributes-iosSrc).
I’m working on creating a marker-based AR game using A-Frame 1.2.0 and AR.js 3.3.3. The display shows 2D images of animals that the user has to “find”. The whole game functions well now, but I was running into an issue of photos appearing distorted or warped. I figured out that the issue is that the marker’s plane is not being read correctly by mobile devices. The pictures below include a red cube to show the issue better. The top one, from a PC’s webcam, correctly shows the box mounted to the marker. The bottom one shows the box not mounted to the marker.
I figure that the issue is either the mobile device’s gyroscope or the screen dimensions affecting the aspect ratio.
I’ve tried a few properties on A-Frame’s a-entity, such as look-controls=‘Enabled:false’ and look-controls=‘magicWindowTrackingEnabled: false’. Neither of those made a difference. I haven’t found properties within AR.js to use. Just wondering if anyone has come across this issue and found a fix.
[Image: box aligned correctly with the marker's plane (PC webcam)]
[Image: box not aligned with the marker's plane (mobile)]
AR.js comes in two different, mutually exclusive builds: image + location-based tracking, and marker tracking (link).
Importing the wrong one will cause incorrect behavior like the one you're experiencing.
The ARKit and RealityKit tutorials I have found all deal with anchors. However, there are VR apps that do not place any objects on surfaces. Instead, they just take the location and orientation of the device to display objects that are far away from the phone:
Star Chart shows stars, planets and the sun at their apparent location.
Peakfinder shows the mountains which are currently visible.
Both these apps do not need any real-world anchors. They just take the camera's location and orientation, and then render a model.
Can I create a similar app with ARKit or RealityKit, or is this a use case beyond these two frameworks?
It depends on what you need: an AR or a VR app. Generally speaking, you definitely need anchors for an AR app, while for VR you may or may not need them (RealityKit supports anchoring out of the box, but SceneKit doesn't support anchoring at all).
If you need comprehensive info about ARKit and RealityKit anchors, read this post.
Using the RealityKit framework you can easily create both VR and AR apps (games, visualisations, scientific apps, and so on). If you place 3D models in a VR scene, you tether those models (like the distant stars or mountains mentioned above) with AnchorEntity(.world) anchors. If you place 3D models in an AR scene, you tether a model with any of the following anchor types: .world, .image, .face, .plane, .body, etc.
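For the VR-style case just described, here's a minimal RealityKit sketch. It assumes a plain UIViewController hosting an ARView; the grey box just stands in for whatever distant model (star, mountain) you would actually load:

```swift
import RealityKit
import UIKit

// Minimal sketch: place a distant object with a .world anchor.
// No plane, image, or face detection is involved.
class DistantObjectViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        let arView = ARView(frame: view.bounds)
        view.addSubview(arView)

        // World anchor 500 m in front of the session origin.
        let worldAnchor = AnchorEntity(world: SIMD3<Float>(0, 0, -500))

        // Placeholder "mountain": any ModelEntity or loaded .usdz works here.
        let mountain = ModelEntity(mesh: .generateBox(size: 50),
                                   materials: [SimpleMaterial(color: .gray, isMetallic: false)])
        worldAnchor.addChild(mountain)

        arView.scene.addAnchor(worldAnchor)
    }
}
```

Because the anchor is a fixed world transform rather than a detected plane or image, the model keeps its position relative to the session origin as the device moves, which is exactly the Star Chart / Peakfinder behaviour.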
Using the pure SceneKit framework you can create only VR apps; SceneKit doesn't have any anchors under its hood. But if you use SceneKit together with ARKit, you can create AR apps with all the corresponding anchors that ARKit provides. This post covers the RealityKit/SceneKit differences. In addition to the above, I should say that ARKit can't render VR or AR scenes; ARKit's sole purpose is world tracking and scene understanding.
If most devices don't support ARCore, then why does Pokémon Go run on every device?
My device is not supported by ARCore, but Pokémon Go runs on it at full performance.
Why?
Until October 2017, Pokémon Go appears to have used a Niantic-made AR engine. At a high level, the game placed a Pokémon globally in space at a server-defined location (the spawn point). The AR engine used the phone's GPS and compass to determine whether the phone should be turned to the left or to the right. Eventually, the phone pointed at the right heading and the AR engine drew the 3D model over the video coming from the camera. At that time there was no attempt to map the environment, recognize surfaces, etc. It was a simple yet very effective technique that created the stunning effects we've all seen.
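A rough illustration of that GPS-plus-compass idea in Swift (not Niantic's actual code; the spawn point and the turn decision are made up for the example):

```swift
import CoreLocation
import Foundation

// Bearing (0°..360°) from the player's position to a server-defined spawn point.
func bearing(from origin: CLLocationCoordinate2D, to target: CLLocationCoordinate2D) -> Double {
    let lat1 = origin.latitude * .pi / 180
    let lat2 = target.latitude * .pi / 180
    let dLon = (target.longitude - origin.longitude) * .pi / 180
    let y = sin(dLon) * cos(lat2)
    let x = cos(lat1) * sin(lat2) - sin(lat1) * cos(lat2) * cos(dLon)
    let degrees = atan2(y, x) * 180 / .pi
    return (degrees + 360).truncatingRemainder(dividingBy: 360)
}

// Positive result: turn right; negative: turn left; near zero: the phone is
// pointing at the spawn point, so draw the 3D model over the camera feed.
func headingOffset(playerHeading: CLLocationDirection,
                   player: CLLocationCoordinate2D,
                   spawnPoint: CLLocationCoordinate2D) -> Double {
    var delta = bearing(from: player, to: spawnPoint) - playerHeading
    if delta > 180 { delta -= 360 }
    if delta < -180 { delta += 360 }
    return delta
}
```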
After that, Niantic showed prototypes of Pokémon GO using ARKit for iOS. The enhancements are easy to notice: missed Pokéballs bounce very naturally on the sidewalk and respect physics, and Pikachu appears to walk naturally on the sidewalk instead of floating in the air as in the release that was current at the time. Most observers expected Niantic to replace its engine with ARKit (iOS) and ARCore (Android), possibly via Unity 3D AR APIs.
In early 2018, Niantic improved the look of the game on Android by adding support for ARCore, Google's augmented reality SDK, a similar update to the one iOS 11 users had already seen when the game was updated to support ARKit. The iOS update gave the virtual monsters a much greater sense of presence in the world thanks to camera tracking, allowing them to stand accurately on real-world surfaces rather than floating in the center of the frame. Android users need a phone compatible with ARCore in order to use the new "AR+" mode.
Prior to AR+, Pokémon Go used rough approximations of where objects were to try to place the Pokémon in your environment, but it was a clunky workaround that functioned mostly as a novelty feature. The AR+ mode also lets iOS users take advantage of a new capture bonus, called Expert Handler, that involves sneaking up close to a Pokémon, so as not to scare it away, in order to capture it more easily. Since ARKit is designed to use the camera together with the gyroscope and the other sensors, it feeds in 60 fps at full resolution. It is much more performant and it actually uses less battery than the original AR mode.
For iOS users there's a standard list of supported devices:
iPhone 6s and higher
iPad 2017 and higher
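For completeness, on iOS you don't have to rely on the device list alone; a minimal Swift check with the standard ARKit API tells you at runtime whether world tracking is available:

```swift
import ARKit

// Returns true on devices with an A9 chip or newer (iPhone 6s and later),
// which is what the list above boils down to.
func deviceSupportsWorldTracking() -> Bool {
    return ARWorldTrackingConfiguration.isSupported
}
```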
For Android users not everything is clear. Let's see why. Even if you have an officially unsupported device with poorly calibrated sensors, you can still use ARCore on your phone; for example, the ARCore for All project lets you do it. So for Niantic, likewise, there is no difficulty in making every Android phone suitable for Pokémon Go.
Hope this helps.
I'm working on an application where the concept is that you can 'select' objects before actually placing them. So what I wanted to do was have some low-quality objects on a shelf or something similar. When the user selects an object, they can then tap to place the high-quality version of the object in their area for further viewing.
I was wondering if this is possible with Vuforia. I wanted to use this platform since it works well from what I could tell, and it's cross-platform (the application needs to run on Android and the HoloLens).
I have set up the basic application where you can place a capsule in the area. Now I wanted to automatically place the object (in this case the capsule) once Vuforia has detected a ground plane. From what I could see, the plane finder has events that fire when an input is detected, but I couldn't find an event that fires when the ground plane itself is detected. Is this still possible with Vuforia? I know it's doable on the HoloLens, but I would like to know if it's possible on Android or other mobile devices. I really don't know where to start looking, so I hope someone can point me in the right direction.
Let me know if I need to include more information!
The Vuforia PlaneFinderBehaviour (see doc here) has the event OnAutomaticHitTest, which fires every frame in which a ground plane is detected.
So you can use it to automatically spawn an object.
You have to add your method to the On Automatic Hit Test list of the "Plane Finder", instead of the On Interactive Hit Test list.
I've heard that Vuforia Fusion does not yet support ARCore (it supports ARKit), so it uses an internal implementation to simulate ARCore's functionality, and they are waiting for a final release of ARCore to support it. Many users have reported that their objects move even when they use an ARCore-supported device.