What is improved in iPhone 6 Face Detection?

Apple's description of the iPhone 6 includes this text:

"Improved face detection. The iSight camera now recognizes faces faster and more accurately — even those farther away or in a crowd — for better portraits and group shots. And it improves blink and smile detection and selection of faces in burst mode to automatically capture your best shots."
I've used iOS face detection before, both from Core Image and AVFoundation. I see no mention in either framework's documentation of improvements for iPhone 6, nor anywhere else in Apple's literature.
My app is showing near-identical detection times for both libraries running on an iPhone 6.
Is there any documentation on this "improved face detection"? Specifically, are these updates to the existing libraries, or is there something new that should be included?
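For context, a minimal sketch of the Core Image path being timed, with someImage as a placeholder for a test photo (the AVFoundation path instead uses AVCaptureMetadataOutput with AVMetadataObjectTypeFace on a live capture session):

    import UIKit
    import CoreImage

    // Sketch: the Core Image face detection path. someImage is a placeholder
    // for whatever test photo you are benchmarking with.
    func detectFaces(in image: UIImage) -> [CIFaceFeature] {
        guard let ciImage = CIImage(image: image) else { return [] }
        let detector = CIDetector(ofType: CIDetectorTypeFace,
                                  context: nil,
                                  options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
        return detector?.features(in: ciImage) as? [CIFaceFeature] ?? []
    }

    // Timing this call on different devices is one way to look for improvements:
    let start = CACurrentMediaTime()
    let faces = detectFaces(in: someImage)
    print("Found \(faces.count) faces in \(CACurrentMediaTime() - start) s")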

Related

Can ARKit 1.5 detect vertical surfaces?

If I am not wrong, ARKit does not support vertical surface detection.
But according to https://developer.apple.com/news/ and https://developer.apple.com/news/?id=01242018b:
iOS 11 is the biggest AR platform in the world, allowing you to create unparalleled augmented reality experiences for hundreds of millions of iOS users. Now you can build even more immersive experiences by taking advantage of the latest features of ARKit, available in iOS 11.3 beta. With improved scene understanding, your app can see and place virtual objects on vertical surfaces, and more accurately map irregularly shaped surfaces.
Does this mean that ARKit 1.5 can detect vertical surfaces too?
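For reference, the iOS 11.3 API reflects the announcement above: ARWorldTrackingConfiguration.planeDetection gains a .vertical option. A minimal sketch, assuming a standard ARSCNView setup (sceneView is a placeholder name):

    import ARKit

    // Sketch: as of iOS 11.3 (ARKit 1.5), plane detection can include vertical
    // surfaces alongside horizontal ones.
    let configuration = ARWorldTrackingConfiguration()
    if #available(iOS 11.3, *) {
        configuration.planeDetection = [.horizontal, .vertical]
    } else {
        configuration.planeDetection = .horizontal   // before iOS 11.3: horizontal only
    }
    sceneView.session.run(configuration)   // sceneView: your ARSCNView (placeholder)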

Hardware differences for ARKit?

Are there any differences in what's available for ARKit depending on hardware? Is iPhone 8 a more potent AR platform than older models in terms of pure functionality?
The face tracking features in ARKit (ARFaceTrackingConfiguration) require a front-facing TrueDepth camera. As yet there’s only one device with that — iPhone X — so the face tracking performance and feature set is the same on all devices that support it.
World tracking (with the back-facing camera) is available on any iOS device with an A9 processor or better. As of this writing that spans three years’ worth of iPhone and iPad hardware releases. Once you meet the bar for supporting ARKit (world / orientation tracking) at all, though, there are no feature set differences between devices.
Depending on what you do with an ARKit session — what technology you use to render overlay content and how complex such content is — you’ll probably find differences in performance across major device generations.
Yes, ARKit runs on the Apple A9, A10, and A11 processors. In terms of devices, that means it works on the iPhone 6S / iPhone SE or newer.
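In code, that support bar shows up as a runtime check rather than a per-device feature flag; a minimal sketch (session is a placeholder for your ARSession):

    import ARKit

    // Sketch: ARKit support is queried at runtime; there is no per-device
    // feature set to check beyond this.
    if ARWorldTrackingConfiguration.isSupported {
        // A9 or newer: full six-degrees-of-freedom world tracking
        session.run(ARWorldTrackingConfiguration())
    } else if AROrientationTrackingConfiguration.isSupported {
        // Orientation-only (rotation) tracking as a fallback
        session.run(AROrientationTrackingConfiguration())
    }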

How to use the iPhone X Face ID data

Is it possible to use iPhone X Face ID data to create a 3D model of the user's face? If yes, can you please tell me where I should look? I was not really able to find anything related to this. I found a video from WWDC about TrueDepth and ARKit, but I am not sure it would help.
Edit:
I just watched a WWDC video and it says that ARKit provides detailed 3D face geometry. Do you think it's precise enough to create a 3D representation of a person's face? Maybe combined with an image? Any ideas?
Yes and no.
Yes, there are APIs for getting depth maps captured with the TrueDepth camera, for face tracking and modeling, and for using Face ID to authenticate in your own app:
You implement Face ID support using the LocalAuthentication framework. It's the same API you use for Touch ID support on other devices: you don't get any access to the internals of how the authentication works or to the biometric data involved, just a simple yes-or-no answer about whether the user passed authentication (see the first sketch below).
For simple depth map capture with photos and video, see AVFoundation > Cameras and Media Capture, or the WWDC17 session on depth capture. Everything about capturing depth with the iPhone 7 Plus dual back camera also applies to the iPhone X and 8 Plus dual back cameras, and to the front TrueDepth camera on iPhone X (see the second sketch below).
For face tracking and modeling, see ARKit, specifically ARFaceTrackingConfiguration and related API. There's sample code showing the various basic things you can do here, as well as the Face Tracking with ARKit video you found.
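A minimal sketch of the LocalAuthentication path from the first point above (the localizedReason string is a placeholder):

    import LocalAuthentication

    // Sketch: Face ID and Touch ID share the LocalAuthentication API; you get a
    // pass/fail result, never the biometric data itself.
    let context = LAContext()
    var authError: NSError?
    if context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &authError) {
        context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                               localizedReason: "Unlock your account") { success, error in
            // success is a simple yes-or-no; the reason string is a placeholder
            print(success ? "Authenticated" : "Failed: \(String(describing: error))")
        }
    }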
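And a minimal sketch of streaming depth capture from the second point, assuming you already have a configured AVCaptureSession with a TrueDepth (or dual back) camera input:

    import AVFoundation

    // Sketch: adding streaming depth output to an already-configured capture
    // session. The session and delegate are assumptions about your own setup.
    func addDepthOutput(to session: AVCaptureSession,
                        delegate: AVCaptureDepthDataOutputDelegate) {
        let depthOutput = AVCaptureDepthDataOutput()
        guard session.canAddOutput(depthOutput) else { return }
        session.addOutput(depthOutput)
        depthOutput.isFilteringEnabled = true   // interpolate holes in the depth map
        depthOutput.setDelegate(delegate, callbackQueue: DispatchQueue(label: "depth"))
    }

    // In the delegate, each AVDepthData arrives with its depth map:
    // func depthDataOutput(_ output: AVCaptureDepthDataOutput,
    //                      didOutput depthData: AVDepthData,
    //                      timestamp: CMTime, connection: AVCaptureConnection) {
    //     let depthMap = depthData.depthDataMap   // CVPixelBuffer of depth values
    // }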
Yes, indeed, you can create a 3D representation of a user's face with ARKit. The wireframe you see in that video is exactly that, and is provided by ARKit. With ARKit's SceneKit integration you can easily display that model, add textures to it, add other 3D content anchored to it, etc. ARKit also provides another form of face modeling called blend shapes — this is the more abstract representation of facial parameters, tracking 50 or so muscle movements, that gets used for driving avatar characters like Animoji.
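A minimal sketch of that SceneKit integration, assuming an ARSCNView running ARFaceTrackingConfiguration whose delegate implements these ARSCNViewDelegate methods:

    import ARKit
    import SceneKit
    import Metal

    // Sketch: display ARKit's face mesh and keep it updated as the face moves.
    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        guard anchor is ARFaceAnchor,
              let device = MTLCreateSystemDefaultDevice(),
              let faceGeometry = ARSCNFaceGeometry(device: device) else { return nil }
        let node = SCNNode(geometry: faceGeometry)
        node.geometry?.firstMaterial?.fillMode = .lines   // the wireframe look from the video
        return node
    }

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor,
              let faceGeometry = node.geometry as? ARSCNFaceGeometry else { return }
        faceGeometry.update(from: faceAnchor.geometry)   // keep the mesh tracking the face
        // faceAnchor.blendShapes holds the ~50 expression coefficients (e.g. .jawOpen)
    }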
All of this works with a generalized face model, so there's not really anything in there about identifying a specific user's face (and you're forbidden from trying to use it that way in the App Store — see §3.3.52 "If your application accesses face data..." in the developer program license agreement).
No, Apple provides no access to the data or analysis used for enrolling or authenticating Face ID. Gaze tracking / attention detection and whatever parts of Apple's face modeling have to do with identifying a unique user's face aren't parts of the SDK Apple provides.

Reproduce the new scanning feature in iOS 11 Notes

Does anyone know how to reproduce the new scanning feature in Notes in iOS 11?
Is AVFoundation used for the camera?
How is the camera detecting the shape of the paper/document/card?
How do they place the overlay in real time?
How does the camera know when to take the photo?
What's that animated overlay and how can we achieve this?
Does anyone know how to reproduce this?
Not exactly :P
Is AVFoundation used for the camera? Yes
How is the camera detecting the shape of the paper/document/card?
They are using the Vision Framework to do rectangle detection.
It's stated in this WWDC session by one of the demonstrators.
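A minimal sketch of that rectangle detection, where pixelBuffer is a placeholder for the CVPixelBuffer from your AVCaptureVideoDataOutput callback:

    import Vision

    // Sketch: Vision framework rectangle detection on a camera frame.
    let request = VNDetectRectanglesRequest { request, error in
        guard let rectangles = request.results as? [VNRectangleObservation] else { return }
        for rectangle in rectangles {
            // Corners are normalized (0...1) relative to the image, lower-left origin.
            print(rectangle.topLeft, rectangle.topRight,
                  rectangle.bottomLeft, rectangle.bottomRight)
        }
    }
    request.minimumConfidence = 0.8   // ignore low-confidence candidates

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])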
How do they place the overlay over in real time?
You should check out the above video for this, as he talks about doing something similar in one of the demos.
How does the camera know when to take the photo?
I'm not familiar with this app but it's surely triggered in the capture session, no?
What's that animated overlay and how can we achieve this?
Not sure about this, but I'd imagine it's some kind of CALayer with animation (sketch below).
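Purely as an assumption about how such an overlay could be built (this is not Apple's implementation), a sketch using a CAShapeLayer whose path animates between detected quadrilaterals so the highlight appears to track the page:

    import UIKit

    // Sketch (assumption): animate a translucent quadrilateral over the
    // detected document, given four corner points in view coordinates.
    func updateOverlay(_ overlay: CAShapeLayer, corners: [CGPoint]) {
        guard corners.count == 4 else { return }
        let path = UIBezierPath()
        path.move(to: corners[0])
        corners.dropFirst().forEach { path.addLine(to: $0) }
        path.close()

        let animation = CABasicAnimation(keyPath: "path")
        animation.fromValue = overlay.path
        animation.toValue = path.cgPath
        animation.duration = 0.1   // short, so it feels like live tracking
        overlay.add(animation, forKey: "path")

        overlay.path = path.cgPath
        overlay.fillColor = UIColor.yellow.withAlphaComponent(0.3).cgColor
    }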
Is Tesseract framework used for the image afterwards?
Isn't Tesseract OCR for text?
If you're looking for handwriting recognition, you might want to look for an MNIST model.
Use Apple's rectangle detection SDK, which provides an easy-to-use API that can identify rectangles in still images or video sequences in near real time. The algorithm works very well in simple scenes with a single prominent rectangle on a clean background, but is less accurate in more complicated scenes, such as capturing small receipts or business cards against cluttered backgrounds, which are essential use cases for a scanning feature.
CIDetector is "an image processor that identifies notable features (such as faces and barcodes) in a still image or video": https://developer.apple.com/documentation/coreimage/cidetector
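A minimal sketch of that CIDetector route, where ciImage is a placeholder for a CIImage made from the current camera frame:

    import CoreImage

    // Sketch: the pre-Vision alternative for finding a document rectangle.
    let detector = CIDetector(ofType: CIDetectorTypeRectangle,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    if let rectangle = detector?.features(in: ciImage).first as? CIRectangleFeature {
        print(rectangle.topLeft, rectangle.topRight,
              rectangle.bottomLeft, rectangle.bottomRight)
    }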

Will ARKit work better in the latest iOS devices?

I am trying to find out whether the accuracy, plane detection, and world tracking of ARKit will be better on the iPhone 8 Plus and iPhone X compared to an iPhone 7.
I googled it and read through this webpage. There is no mention of the dual cameras, no explanation of camera specs, and nothing about whether the processing power or better cameras in the latest devices will make ARKit more accurate (read here).
I am working on an accuracy-sensitive ARKit app and I'd like to know more about this topic.
ARKit doesn't use the dual camera, so there's no functional difference between an iPhone (pick a number) Plus or iPhone X and other devices.
Apple's marketing claims that iPhone 8 / 8 Plus and iPhone X are factory calibrated for better / more precise AR, but makes no definition of baseline vs improved precision to measure by.
That's about all Apple's said or is publicly known.
Beyond that, it's probably safe to assume that even if there's no difference to the world tracking algorithms and camera / motion sensor inputs to those algorithms, the increased CPU / GPU performance of A11 either gives your app more overhead to spend its performance budget on spiffy visual effects or lets you do more with AR before running the battery down. So you can say "better" about newer vs older devices, in general, but not necessarily in a way that has anything to do with algorithmic accuracy.
There's a tiny bit of room for ARKit to be "better" on, say, iPhone X or 8 Plus vs iPhone 8, and especially on iPad vs iPhone — and again it's more about performance differences than functional differences. Among devices of the same processor generation (A10, A11, etc), those devices with larger batteries and larger physical enclosures are less thermally constrained — so the combination of ARKit plus your rendering engine and game/app code will get to spend more time pushing the silicon at full speed than it will on a smaller device.
