Why is the list of ARCore supported devices limited? - augmented-reality

What makes an ARCore supported device able to support ARCore?
Which features make a device support ARCore?
What is the difference between an ARCore device and other, non-supported devices?

It is not about how new the phone is, but whether the phone went through certain tests and measurements when it was designed and built.
In practice, it means your phone needs hardware like the following:
Accelerometer: measures acceleration, which is change in speed divided by time. Simply put, it’s the measure of change in velocity. Acceleration forces can be static/continuous—like gravity—or dynamic, such as movement or vibrations.
Gyroscope: measures and/or maintains orientation and angular velocity. When you change the rotation of your phone while using an AR experience, the gyroscope measures that rotation and ARCore ensures that the digital assets respond correctly.
Phone Camera: with mobile AR, your phone camera supplies a live feed of the surrounding real world upon which AR content is overlaid. In addition to the camera itself, ARCore-capable phones like the Google Pixel rely on complementary technologies like machine learning, complex image processing, and computer vision to produce high-quality images and spatial maps for mobile AR.
Magnetometer: gives smartphones a simple orientation related to the Earth's magnetic field. Because of the magnetometer, your phone always knows which direction is North, allowing it to auto-rotate digital maps depending on your physical orientation. This device is key to location-based AR apps.
GPS: a global navigation satellite system that provides geolocation and time information to a GPS receiver, like in your smartphone. For ARCore-capable smartphones, this device helps enable location-based AR apps.
Display: the display on your smartphone is important for crisp imagery and displaying 3D rendered assets. For instance, the Google Pixel XL has a 5.5" AMOLED QHD (2560 x 1440) display at 534 ppi, which means the phone can display 534 pixels per inch, making for rich, vivid images.
You can find this information and more in the Introduction to Augmented Reality and ARCore course by Google AR & VR.
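If you are curious whether a particular Android phone actually has these sensors, you can query them at runtime with Android's SensorManager. A minimal sketch (the activity name and logging are just illustrative, not part of ARCore):

import android.app.Activity;
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorManager;
import android.os.Bundle;
import android.util.Log;

public class SensorCheckActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        SensorManager sensorManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE);

        // getDefaultSensor() returns null when the device has no physical sensor of that type.
        boolean hasAccelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER) != null;
        boolean hasGyroscope = sensorManager.getDefaultSensor(Sensor.TYPE_GYROSCOPE) != null;
        boolean hasMagnetometer = sensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD) != null;

        Log.d("SensorCheck", "accelerometer=" + hasAccelerometer
                + " gyroscope=" + hasGyroscope
                + " magnetometer=" + hasMagnetometer);
    }
}

Note that having these sensors is necessary but not sufficient: as described above, Google also certifies each phone model's sensor calibration and camera behaviour before listing it as ARCore supported.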

In order to get an AR experience, an ARCore-compatible device must have 4 sensors:
Accelerometer
Gyroscope
Back RGB Camera
Magnetometer (it'll be supported in the future)
If your smartphone doesn't have a gyroscope (or only has a virtual gyroscope) to measure orientation (rotation) in space, your Android device is considered ARCore-incompatible. Every ARCore-compatible device must support 3D tracking with six degrees of freedom (XYZ position and XYZ rotation).
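To make "six degrees of freedom" concrete, ARCore reports the camera pose every frame as a 3D position plus a 3D rotation. A minimal sketch, assuming you already have a configured and resumed com.google.ar.core.Session and call this from your render loop (the method name is illustrative):

import android.util.Log;
import com.google.ar.core.Frame;
import com.google.ar.core.Pose;
import com.google.ar.core.Session;
import com.google.ar.core.exceptions.CameraNotAvailableException;

// Called once per rendered frame.
void logCameraPose(Session session) throws CameraNotAvailableException {
    Frame frame = session.update();
    Pose pose = frame.getCamera().getPose();

    // XYZ position: the first three degrees of freedom.
    Log.d("ARCorePose", "position = (" + pose.tx() + ", " + pose.ty() + ", " + pose.tz() + ")");
    // XYZ rotation (stored as a quaternion): the other three degrees of freedom.
    Log.d("ARCorePose", "rotation = (" + pose.qx() + ", " + pose.qy()
            + ", " + pose.qz() + ", " + pose.qw() + ")");
}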
When you're developing AR apps for the Google Play Store, you should write code that checks for ARCore compatibility.
How to perform runtime checks.
Check whether ARCore is supported (for AR Optional apps only).
AR Optional apps can use ArCoreApk.checkAvailability() to determine if the current device supports ARCore. On devices that do not support ARCore, AR Optional apps should disable AR related functionality and hide associated UI elements:
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // Enable AR-related functionality on ARCore-supported devices only.
    maybeEnableArButton();
}

void maybeEnableArButton() {
    ArCoreApk.Availability availability = ArCoreApk.getInstance().checkAvailability(this);
    if (availability.isTransient()) {
        // Re-query at 5 Hz while compatibility is checked in the background.
        new Handler().postDelayed(new Runnable() {
            @Override
            public void run() {
                maybeEnableArButton();
            }
        }, 200);
    }
    if (availability.isSupported()) {
        mArButton.setVisibility(View.VISIBLE);
        mArButton.setEnabled(true);
    } else { // The device is unsupported or unknown.
        mArButton.setVisibility(View.INVISIBLE);
        mArButton.setEnabled(false);
    }
}
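The snippet above only toggles the UI. Before actually creating a Session, you also need to make sure Google Play Services for AR (the ARCore runtime) is installed and up to date, which ArCoreApk.requestInstall() handles. A rough sketch following the same pattern, assuming camera permission has already been granted (mSession and mUserRequestedInstall are illustrative field names):

private Session mSession;
private boolean mUserRequestedInstall = true;

@Override
protected void onResume() {
    super.onResume();
    try {
        if (mSession == null) {
            switch (ArCoreApk.getInstance().requestInstall(this, mUserRequestedInstall)) {
                case INSTALLED:
                    // ARCore is installed and up to date, so it is safe to create the session.
                    mSession = new Session(this);
                    break;
                case INSTALL_REQUESTED:
                    // ARCore pauses this activity, prompts the user to install or update
                    // Google Play Services for AR, then resumes the activity afterwards.
                    mUserRequestedInstall = false;
                    return;
            }
        }
    } catch (UnavailableException e) {
        // Covers the user declining the install, an incompatible device, etc.
        // mSession stays null, so AR features remain disabled.
        return;
    }
}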

ARCore requires devices with 6 degrees of freedom (6DoF); unsupported devices usually only provide 3 degrees of freedom (3DoF).

Related

Selecting a specific camera through Qt when multiple 'lenses' are present?

I have a Qt 5.15 QWidgets application (targeted primarily for use on iOS). I would like the user to be able to select a specific rear-facing camera of a phone that has multiple rear-facing cameras of different focal lengths (e.g. the iPhone 12 has three rear-facing cameras: wide (0.5x), normal (1x) and telephoto (2.5x)). Qt Quick solutions are also welcome.
const QList<QCameraInfo> cameras = QCameraInfo::availableCameras(); only returns two cameras on the iPhone (one front, one rear), even though there are really four (one front, three rear).
QCameraFocus::zoomTo can increase the digital zoom of the rear-facing camera, but changing the optical zoom has no effect. QCameraFocus::maximumOpticalZoom returns 1.0 for the rear camera as well.
How can I use any of the rear-facing cameras other than the default with Qt?

iOS ARKit 3 with iPad Pro 2020: how to use front camera data with back camera tracking?

The ARKit API supports simultaneous world and face tracking via the back and front cameras, but unfortunately, due to hardware limitations, the new iPad Pro 2020 is unable to use this feature (probably because the LiDAR camera draws a lot more power). This is a bit of a step back.
Here is an updated reference in the example project:
guard ARWorldTrackingConfiguration.supportsUserFaceTracking else {
    fatalError("This sample code requires iOS 13 / iPadOS 13, and an iOS device with a front TrueDepth camera. Note: 2020 iPads do not support user face-tracking while world tracking.")
}
There is also a forum conversation suggesting that this is an unintentional hardware limitation.
It looks like the mobile technology is not "there yet" for both. However, for my use case I just wanted to be able to switch between front and back tracking modes seamlessly, without needing to reconfigure the tracking space. For example, I would like a button to toggle between "now you track and see my face" mode and "world tracking" mode.
There are two cases: either it is possible or it is not, and there may be alternative approaches depending on which it is.
Is it possible, or would switching AR tracking modes necessitate setting-up the tracking space again? If so, how would it be achieved?
If it's impossible:
Even if I don't get face-tracking during world-tracking, is there a way to get a front-facing camera feed that I can use with the Vision framework, for example?
Specifically: how do I enable back-facing tracking, get the front- and back-facing camera feeds simultaneously, and disable one or the other selectively? Even if front-facing tracking is unavailable and I only get the basic feed, that would work for me.

Creating a single view in Unity using GoogleVR (Google Cardboard) for iOS

I am a college student attempting to build a VR application for iOS using Unity paired with the GoogleVR SDK (Google Cardboard). I can get my app to run on an iPad, but the display on the screen is split into two viewports (or cameras, I'm not sure of the correct terminology), one for each eye.
While this may be contradictory to the idea of VR, I actually only want a single central camera's perspective and for that display to fill the whole screen.
I've been searching through the Unity project files and the google Cardboard files, but haven't found a way to do this. Is there a simple way to turn off the two eye display and instead do a single view? If so, what file would I modify?
Thanks!
The main things that the Cardboard SDK gives you on iOS are stereoscopic rendering, control of camera rotation based on the gyroscope, and the gaze pointer. If you don't want stereoscopic rendering, you can disable VR support in XR Settings and use some simple replacements for the other two items. You can add a regular camera to your scene and then use a script like this to set its rotation based on the phone's gyroscope:
using UnityEngine;

class SceneManager : MonoBehaviour {
    void Start() {
        // Enable the gyro so that it can be used to control the camera rotation.
        Input.gyro.enabled = true;
    }

    void Update() {
        // Update the camera rotation based on the gyroscope.
        Camera.main.transform.Rotate(
            -Input.gyro.rotationRateUnbiased.x,
            -Input.gyro.rotationRateUnbiased.y,
            Input.gyro.rotationRateUnbiased.z
        );
    }
}
To replace the gaze pointer, you can use Unity's standalone input module to route screen touch events through the input system (e.g. to trigger scripts that implement IPointerClickHandler).

Why is Pokemon Go running on unsupported devices?

If most devices do not support ARCore, then why does Pokemon Go run on every device?
My device is not supported by ARCore, but Pokemon Go runs on it with full performance.
Why?
Until October 2017, Pokemon Go appeared to use a Niantic-made AR engine. At a high level, the game placed the Pokemon globally in space at a server-defined location (the spawn point). The AR engine used the phone's GPS and compass to determine whether the phone should be turned to the left or to the right. Once the phone pointed at the right heading, the AR engine drew the 3D model over the video coming from the camera. At that time there was no attempt to map the environment, recognize surfaces, etc. It was a simple yet very effective technique that created the stunning effects we've all seen.
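In other words, the old engine boiled down to "compute the bearing from the player to the spawn point and compare it with the compass heading". A rough, illustrative sketch of that idea in Android terms (this is not Niantic's actual code; the method name and the 15-degree threshold are made up):

import android.hardware.SensorManager;
import android.location.Location;

// Returns true when the phone is pointed roughly at the spawn point,
// i.e. when this kind of engine would draw the Pokemon over the camera feed.
boolean isPointingAtSpawn(Location playerLocation, Location spawnLocation, float[] rotationVector) {
    // Bearing from the player to the spawn point, in degrees east of true north.
    float bearingToSpawn = playerLocation.bearingTo(spawnLocation);

    // Device compass heading (azimuth), derived from the rotation vector sensor.
    float[] rotationMatrix = new float[9];
    float[] orientation = new float[3];
    SensorManager.getRotationMatrixFromVector(rotationMatrix, rotationVector);
    SensorManager.getOrientation(rotationMatrix, orientation);
    float azimuthDegrees = (float) Math.toDegrees(orientation[0]);

    // Smallest angle between the two headings, in the range 0..180 degrees.
    float delta = Math.abs(((bearingToSpawn - azimuthDegrees) + 540f) % 360f - 180f);

    // Arbitrary "close enough" threshold for this sketch.
    return delta < 15f;
}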
After that, Niantic showed prototypes of Pokemon GO using ARKit for iOS. The enhancements are easy to notice: missed Pokeballs appear to bounce very naturally on the sidewalk and respect physics, and Pikachu seems to walk naturally on the sidewalk instead of floating in the air as in the release of that time. Most observers expected Niantic to replace the existing engine with ARKit (iOS) and ARCore (Android), possibly via Unity 3D AR APIs.
In early 2018, Niantic improved the look of the game on Android by adding support for ARCore, Google's augmented reality SDK. It was a similar update to what we had already seen on iOS 11, where the game was updated to support ARKit. The iOS update gave the virtual monsters a much greater sense of presence in the world thanks to camera tracking, allowing them to stand on real-world surfaces more accurately rather than floating in the center of the frame. Android users need a phone compatible with ARCore in order to use the new "AR+" mode.
Prior to AR+, Pokemon Go used rough approximations of where objects were to try to place the Pokemon in your environment, but it was a clunky workaround that functioned mostly as a novelty feature. The new AR+ mode also lets iOS users take advantage of a new capture bonus, called expert handler, that involves sneaking up close to a Pokemon, so as not to scare it away, in order to capture it more easily. With ARKit, since it's designed to use the camera together with the gyroscope and all the other sensors, it actually feeds in 60 fps at full resolution. It's a lot more performant and it actually uses less battery than the original AR mode.
For iOS users there's a standard list of supported devices:
iPhone 6s and higher
iPad 2017 and higher
For Android users, things are less clear-cut. Even if you have an officially unsupported device with poorly calibrated sensors, you can still use ARCore on your phone; for example, the ARCore for All project allows you to do it. So it would not be difficult for Niantic, either, to make every Android phone suitable for Pokemon Go.
Hope this helps.

Detect if iPhone is thrown in the air using Core Motion

I'm trying to detect if my iPhone has been thrown into the air. I've tried using Core Motion's acceleration API and its altitude API. However, because the axes are fixed to the phone, detecting the changes is incredibly difficult. Is there a better way to do what I want? Is it possible to speed up the refresh rate of the CMAltitude API?
In free fall, you should see all three accelerometer values go to 0. Even in a projectile-type fall (a throw), the phone is in free fall as soon as it leaves the thrower's hand.
This white paper talks about using an MCU, but the concept is there:
http://www.nxp.com/files/sensors/doc/app_note/AN3151.pdF
