Will ARKit work better in the latest iOS devices?

I am trying to find out whether the accuracy, plane detection, and world tracking of ARKit will be better on the iPhone 8 Plus and iPhone X compared to the iPhone 7.
I googled it and read through this webpage.
There is no indication of dual cameras, no explanation of the camera specs, and no word on whether the processor power or better cameras in the latest devices will make ARKit more accurate (read here).
I am working on an accuracy-sensitive ARKit app and I'd like to know more about this topic.

ARKit doesn't use the dual camera, so there's no functional difference between an iPhone (pick a number) Plus or iPhone X and other devices.
Apple's marketing claims that iPhone 8 / 8 Plus and iPhone X are factory calibrated for better / more precise AR, but offers no definition of baseline vs. improved precision to measure against.
That's about all Apple's said or is publicly known.
Beyond that, it's probably safe to assume that even if there's no difference to the world tracking algorithms and camera / motion sensor inputs to those algorithms, the increased CPU / GPU performance of A11 either gives your app more overhead to spend its performance budget on spiffy visual effects or lets you do more with AR before running the battery down. So you can say "better" about newer vs older devices, in general, but not necessarily in a way that has anything to do with algorithmic accuracy.
There's a tiny bit of room for ARKit to be "better" on, say, iPhone X or 8 Plus vs iPhone 8, and especially on iPad vs iPhone — and again it's more about performance differences than functional differences. Among devices of the same processor generation (A10, A11, etc), those devices with larger batteries and larger physical enclosures are less thermally constrained — so the combination of ARKit plus your rendering engine and game/app code will get to spend more time pushing the silicon at full speed than it will on a smaller device.

Related

What sensors does ARCore use?

What sensors does ARCore use: single camera, dual-camera, IMU, etc. in a compatible phone?
Also, is ARCore dynamic enough to still work if a sensor is not available by switching to a less accurate version of itself?
Updated: May 10, 2022.
About ARCore and ARKit sensors
Google's ARCore, like Apple's ARKit, uses a similar set of sensors to track the real-world environment. ARCore can use a single RGB camera along with the IMU, which is a combination of an accelerometer, a magnetometer and a gyroscope. Your phone runs world tracking at 60 fps, while the Inertial Measurement Unit operates at 1000 Hz. There is also one more sensor that can be used in ARCore: an iToF camera for scene reconstruction (Apple's equivalent is LiDAR). ARCore 1.25 supports the Raw Depth API and the Full Depth API.
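On the Apple side, the LiDAR / scene-depth counterpart is opted into per configuration. Here is a minimal sketch, assuming an iOS 14+ deployment target and a LiDAR-equipped device; the session variable is assumed to exist elsewhere:

```swift
import ARKit

// A minimal sketch (ARKit side, assuming iOS 14+ and a LiDAR device):
// opt in to per-pixel depth and mesh reconstruction, the rough ARKit
// parallels to ARCore's Raw/Full Depth APIs.
let config = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
    config.frameSemantics.insert(.sceneDepth)   // depth map arrives in ARFrame.sceneDepth
}
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    config.sceneReconstruction = .mesh          // LiDAR-based scene mesh anchors
}
// session.run(config)   // `session` is an ARSession owned elsewhere
```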
Read what Google says about its COM method, built on Camera + IMU:
Concurrent Odometry and Mapping – An electronic device tracks its motion in an environment while building a three-dimensional visual representation of the environment that is used for fixing a drift in the tracked motion.
Here's Google US15595617 Patent: System and method for concurrent odometry and mapping.
In 2014–2017, Google tended towards a MultiCam + DepthCam config (the Tango project)
In 2018–2020, Google tended towards a SingleCam + IMU config
In 2021, Google returned to a MultiCam + DepthCam config
We all know that the biggest problem for Android devices is calibration. iOS devices don't have this issue (because Apple controls its own hardware and software). Poor calibration leads to errors in 3D tracking, so your virtual 3D objects may "float" in a poorly-tracked scene. If you use a phone without an iToF sensor, there's no miraculous button to fix bad tracking (and you can't switch to a less accurate version of tracking). The only solution in that situation is to re-track your scene from scratch. However, tracking quality is much higher when your device is equipped with a ToF camera.
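In ARKit terms, "re-track your scene from scratch" amounts to restarting the session with reset options. A minimal sketch, with the function name chosen here for illustration:

```swift
import ARKit

// A minimal sketch: restart the session and discard the world data and
// anchors accumulated so far, i.e. re-track the scene from scratch.
func restartTracking(for session: ARSession) {
    let config = ARWorldTrackingConfiguration()
    config.planeDetection = .horizontal
    session.run(config, options: [.resetTracking, .removeExistingAnchors])
}
```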
Here are five main rules for good tracking results if you have no ToF camera (see the monitoring sketch after the list):
Track your scene not too fast, not too slow
Track appropriate surfaces and objects
Use a well-lit environment when tracking
Don't track reflective or refractive objects
Horizontal planes are more reliable than vertical ones
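A minimal ARKit-side sketch of how those rules surface at runtime: the session reports why tracking is limited, so an app can tell the user to slow down or find a better surface. The helper name is illustrative; call it from ARSessionObserver's session(_:cameraDidChangeTrackingState:) callback.

```swift
import ARKit

// A minimal sketch: map ARKit's limited-tracking reasons onto the rules above.
// In a real app this string would be shown in the UI.
func trackingAdvice(for camera: ARCamera) -> String? {
    switch camera.trackingState {
    case .limited(.excessiveMotion):
        return "Move the phone more slowly"               // rule 1: not too fast
    case .limited(.insufficientFeatures):
        return "Point at a textured, well-lit surface"    // rules 2–3
    case .limited(_), .notAvailable:
        return "Tracking is limited, keep scanning"
    case .normal:
        return nil
    }
}
```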
SingleCam config vs MultiCam config
One of the biggest problems of ARCore (and of ARKit too) is Energy Impact. The higher the frame rate, the better the tracking results, but the Energy Impact at 30 fps is HIGH and at 60 fps it's VERY HIGH. Such an energy impact quickly drains your smartphone's battery (due to the enormous burden on the CPU/GPU). Now imagine you use two cameras for ARCore: your phone must process two image sequences at 60 fps in parallel, process and store feature points and AR anchors, and at the same time render animated 3D graphics with hi-res textures at 60 fps. That's too much for your CPU/GPU. In such a case, the battery will be dead in 30 minutes and will be as hot as a boiler. Users don't like that, because it makes for a poor AR experience.
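On the ARKit side, one practical lever for trading smoothness against battery and heat is the capture format. A minimal sketch, assuming iOS 11.3+ where ARConfiguration.VideoFormat is exposed; whether a 30 fps format exists depends on the device, hence the optional binding:

```swift
import ARKit

// A minimal sketch (iOS 11.3+): pick a lower-power video format if the
// device offers one, trading some smoothness for battery life and heat.
let config = ARWorldTrackingConfiguration()
let formats = ARWorldTrackingConfiguration.supportedVideoFormats
if let economical = formats
    .filter({ $0.framesPerSecond <= 30 })
    .min(by: { $0.imageResolution.width < $1.imageResolution.width }) {
    config.videoFormat = economical   // smallest 30 fps format available
}
// session.run(config)   // `session` is an ARSession owned elsewhere
```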

Can ARKit 1.5 detect vertical surfaces?

If I am not wrong, ARKit does not support vertical surface detection.
But according to https://developer.apple.com/news/ and https://developer.apple.com/news/?id=01242018b:
iOS 11 is the biggest AR platform in the world, allowing you to create unparalleled augmented reality experiences for hundreds of millions of iOS users. Now you can build even more immersive experiences by taking advantage of the latest features of ARKit, available in iOS 11.3 beta. With improved scene understanding, your app can see and place virtual objects on vertical surfaces, and more accurately map irregularly shaped surfaces.
Does that mean version 1.5 can detect vertical surfaces too?
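For reference, this is how vertical plane detection is opted into once ARKit 1.5 (iOS 11.3) is available; a minimal sketch, with the sceneView assumed to be an ARSCNView set up elsewhere:

```swift
import ARKit

// A minimal sketch: vertical plane detection is an opt-in flag added
// alongside the existing horizontal option in iOS 11.3 / ARKit 1.5.
let configuration = ARWorldTrackingConfiguration()
if #available(iOS 11.3, *) {
    configuration.planeDetection = [.horizontal, .vertical]
} else {
    configuration.planeDetection = .horizontal   // pre-1.5: horizontal only
}
// sceneView.session.run(configuration)
```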

Hardware differences for ARKit?

Are there any differences in what's available for ARKit depending on hardware? Is iPhone 8 a more potent AR platform than older models in terms of pure functionality?
The face tracking features in ARKit (ARFaceTrackingConfiguration) require a front-facing TrueDepth camera. As yet there’s only one device with that — iPhone X — so the face tracking performance and feature set is the same on all devices that support it.
World tracking (with the back-facing camera) is available on any iOS device with an A9 processor or better. As of this writing that spans three years’ worth of iPhone and iPad hardware releases. Once you meet the bar for supporting ARKit (world / orientation tracking) at all, though, there are no feature set differences between devices.
Depending on what you do with an ARKit session — what technology you use to render overlay content and how complex such content is — you’ll probably find differences in performance across major device generations.
Yes, ARKit runs on the Apple A9, A10, and A11 processors. In terms of devices, that means it will work on an iPhone 6S or iPhone SE, or anything newer.
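A minimal sketch of how an app can check those capability tiers at runtime rather than hard-coding device models; the function name and fallback choice are illustrative:

```swift
import ARKit

// A minimal sketch: query ARKit's per-configuration `isSupported` flags
// instead of checking processor generations by hand.
func startSession(in sceneView: ARSCNView) {
    if ARWorldTrackingConfiguration.isSupported {
        // A9 or newer: full 6-DOF world tracking with plane detection.
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = .horizontal
        sceneView.session.run(config)
    } else {
        // Older hardware: fall back to 3-DOF orientation-only tracking.
        sceneView.session.run(AROrientationTrackingConfiguration())
    }

    // Face tracking is gated separately on the TrueDepth front camera.
    if ARFaceTrackingConfiguration.isSupported {
        print("TrueDepth camera available: face tracking is possible")
    }
}
```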

What accuracy and precision can I expect from the iphone magnetometer?

What accuracy and precision can I expect from the iphone magnetometer? That is, how close is the reported result to the "true" bearing and what uncertainty is typically reported?
The iPhone 4S, 5 and 5C contain good quality components and the compass readings are generally accurate to +/-1 degree – better than can be achieved with a traditional hand-held Brunton or Silva compass. Unfortunately the magnetometer/gyroscope/accelerometer combination in the iPhone 5S is of lower quality and the compass is only accurate to +/-5 degrees.
This is of course assuming that the magnetometer has been calibrated, although the phone will typically prompt you to perform the calibration if the software determines the sensor input to be unreliable.
Source
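A minimal sketch of how to read the reported uncertainty yourself via Core Location; the class name is illustrative, and trueHeading is only meaningful when location services are also running:

```swift
import CoreLocation

// A minimal sketch: CLHeading.headingAccuracy is the maximum deviation,
// in degrees, between the reported heading and the true geomagnetic
// heading; a negative value means the heading is invalid.
final class HeadingReader: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    override init() {
        super.init()
        manager.delegate = self
        if CLLocationManager.headingAvailable() {
            manager.startUpdatingHeading()
        }
    }

    func locationManager(_ manager: CLLocationManager, didUpdateHeading newHeading: CLHeading) {
        print("magnetic heading: \(newHeading.magneticHeading)°, ±\(newHeading.headingAccuracy)°")
    }
}
```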

What is improved in iPhone 6 Face Detection

Apple's description of the iPhone 6 includes this text:
Improved face detection
The iSight camera now recognizes faces faster and more accurately — even those farther away or in a crowd — for better portraits and group shots. And it improves blink and smile detection and selection of faces in burst mode to automatically capture your best shots.
I've used iOS face detection before, both from Core Image (docs) and AVFoundation (docs). I see no mention in those libraries of improvements for iPhone 6, or elsewhere in Apple literature.
My app is showing near-identical detection times for both libraries running on an iPhone 6.
Is there any documentation on this "improved face detection"? Specifically, are these updates to the existing libraries, or something new that should be adopted?
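For the kind of timing comparison mentioned above, here is a minimal sketch using CIDetector, the Core Image face-detection API; the accuracy option is the only documented knob, and the function name is illustrative:

```swift
import CoreImage
import QuartzCore
import UIKit

// A minimal sketch: time CIDetector face detection on a test image to
// compare devices. Any iPhone 6 specific improvements would be internal
// to the framework rather than a new API surface.
func detectFaces(in image: UIImage) {
    guard let ciImage = CIImage(image: image) else { return }
    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    let start = CACurrentMediaTime()
    let faces = detector?.features(in: ciImage) ?? []
    let elapsed = CACurrentMediaTime() - start
    print("Found \(faces.count) face(s) in \(elapsed * 1000) ms")
}
```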
