How can we access front-facing camera images with ARCamera or ARSCNView, and is it possible to record an ARSCNView just like a camera recording?
Regarding the front-facing camera: in short, no.
ARKit offers two basic kinds of AR experience:
World Tracking (ARWorldTrackingConfiguration), using the back-facing camera, where a user looks "through" the device at an augmented view of the world around them. (There's also AROrientationTrackingConfiguration, which is a reduced quality version of world tracking, so it still uses only the back-facing camera.)
Face Tracking (ARFaceTrackingConfiguration), supported only with the front-facing TrueDepth camera on iPhone X, where the user sees an augmented view of themselves in the front-facing camera view. (As @TawaNicolas notes, Apple has sample code here... which, until iPhone X actually becomes available, you can read but not run.)
In addition to the hardware requirement, face tracking and world tracking are mostly orthogonal feature sets. So even though there's a way to use the front-facing camera (on iPhone X only), it doesn't give you an experience equivalent to what you get with the back-facing camera in ARKit.
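For illustration, a minimal sketch of picking between those configurations at runtime (the startSession(on:) helper is hypothetical, not an ARKit API):

```swift
import ARKit

// Hypothetical helper: run the best configuration the current device supports.
func startSession(on sceneView: ARSCNView) {
    if ARFaceTrackingConfiguration.isSupported {
        // Front-facing TrueDepth camera (iPhone X).
        sceneView.session.run(ARFaceTrackingConfiguration())
    } else if ARWorldTrackingConfiguration.isSupported {
        // Back-facing camera with full world tracking.
        sceneView.session.run(ARWorldTrackingConfiguration())
    } else {
        // Oldest ARKit devices: orientation-only tracking, still the back camera.
        sceneView.session.run(AROrientationTrackingConfiguration())
    }
}
```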
Regarding video recording in the AR experience: you can use ReplayKit in an ARKit app the same as in any other app.
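For example, a rough ReplayKit sketch (the helper functions are hypothetical); note that it records the whole screen, UI included, not just the camera feed:

```swift
import ReplayKit
import UIKit

let recorder = RPScreenRecorder.shared()

func startScreenRecording() {
    recorder.startRecording { error in
        if let error = error { print("ReplayKit failed to start: \(error)") }
    }
}

func stopScreenRecording(from presenter: UIViewController) {
    recorder.stopRecording { previewController, _ in
        // ReplayKit hands back a preview/share sheet rather than a file URL.
        if let previewController = previewController {
            presenter.present(previewController, animated: true)
        }
    }
}
```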
If you want to record just the camera feed, there isn't a high-level API for that, but in theory you might have some success feeding the pixel buffers you get in each ARFrame to an AVAssetWriter.
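A rough sketch of that idea, assuming you act as the ARSession's delegate (the FrameRecorder class is hypothetical; orientation handling is glossed over, the capturedImage is typically landscape, and this captures only the raw camera image, not the rendered SceneKit content):

```swift
import ARKit
import AVFoundation

final class FrameRecorder: NSObject, ARSessionDelegate {
    private var writer: AVAssetWriter?
    private var input: AVAssetWriterInput?
    private var adaptor: AVAssetWriterInputPixelBufferAdaptor?
    private var startTime: TimeInterval?

    func start(outputURL: URL, width: Int, height: Int) throws {
        let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
        let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: width,
            AVVideoHeightKey: height
        ])
        input.expectsMediaDataInRealTime = true
        let adaptor = AVAssetWriterInputPixelBufferAdaptor(
            assetWriterInput: input, sourcePixelBufferAttributes: nil)
        writer.add(input)
        writer.startWriting()
        (self.writer, self.input, self.adaptor) = (writer, input, adaptor)
    }

    // ARSessionDelegate: the raw camera image arrives here once per frame.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let writer = writer, let input = input, let adaptor = adaptor else { return }
        if startTime == nil {
            startTime = frame.timestamp
            writer.startSession(atSourceTime: .zero)
        }
        guard input.isReadyForMoreMediaData, let startTime = startTime else { return }
        let time = CMTime(seconds: frame.timestamp - startTime, preferredTimescale: 600)
        if !adaptor.append(frame.capturedImage, withPresentationTime: time) {
            print("Dropped frame at \(time.seconds)")
        }
    }

    func finish(completion: @escaping () -> Void) {
        input?.markAsFinished()
        writer?.finishWriting(completionHandler: completion)
    }
}
```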
As far as I know, ARKit with the front-facing camera is only supported on iPhone X.
Here's Apple's sample code regarding this topic.
If you want to access the camera through UIKit or AVFoundation, you still can, but separately from ARSCNView. For example, I'm presenting UIKit's UIImagePickerController from an IBAction; it's a little awkward to do, but it works for my purposes (loading/creating image and video assets).
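Roughly what that looks like (class and action names here are placeholders):

```swift
import UIKit

class MediaPickerViewController: UIViewController,
    UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    @IBAction func pickMedia(_ sender: Any) {
        let picker = UIImagePickerController()
        picker.sourceType = .camera          // separate capture UI, not the ARSCNView feed
        picker.mediaTypes = ["public.image", "public.movie"]
        picker.delegate = self
        present(picker, animated: true)
    }

    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        picker.dismiss(animated: true)
        // Use info[.originalImage] or info[.mediaURL] here.
    }
}
```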
Related
I have an ARKit ARSCNView providing an AR experience with the default rear-facing camera. I would like to use other cameras, such as the ultra-wide camera, for that. If possible, it would also be great to provide smooth zoom. Is this possible?
I did some research and found that one needs to loop over the supported video formats of the AR configuration (e.g. ARWorldTrackingConfiguration.supportedVideoFormats). This gives an array of a few video formats with different frame rates, aspect ratios, and so on, but the capture device type is always AVCaptureDeviceTypeBuiltInWideAngleCamera; ultra-wide and telephoto do not seem to be included. How do we get the AR experience with a camera other than the default (wide) one?
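For reference, the enumeration described above looks roughly like this (captureDeviceType on ARVideoFormat requires iOS 14.5 or later):

```swift
import ARKit

// List the video formats the current device supports for world tracking.
for format in ARWorldTrackingConfiguration.supportedVideoFormats {
    print(format.imageResolution,      // e.g. 1920x1440
          format.framesPerSecond,      // e.g. 30 or 60
          format.captureDeviceType)    // so far only .builtInWideAngleCamera shows up
}
```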
Thanks in advance.
Apple provides a sample project for putting 3D content or face filters on people's faces. The 3D content tracks the face anchor and moves with it, but this only works on devices that have a TrueDepth camera; for example, we cannot use ARSCNFaceGeometry without TrueDepth. How do Facebook or third-party SDKs like Banuba make this work on devices without a depth camera?
As far as I know, using MediaPipe to get a face mesh is the only option without a TrueDepth camera.
I am currently working on a service to update the content of a video based on the markers present in it. I was curious whether we can use Vuforia to achieve this by providing the pre-recorded video as input to Vuforia instead of the live camera feed from the mobile phone.
TL;DR: this is not possible, because replacing the camera feed is not something that either Vuforia or ARKit exposes.
Aside from not exposing the camera, both frameworks use a combination of camera input and sensor data (gyroscope, accelerometer, compass, altitude, etc.) to calculate the phone camera's position (translation/rotation) relative to the marker image.
The effect you are looking for is image tracking and rendering within a video feed. You should consider OpenCV, or another computer vision library, for the feature-point tracking. For rendering there are three options: SceneKit, Metal, or OpenGL. Following Apple's lead, you could use SceneKit for the rendering, similar to how ARKit handles the sensor inputs and uses SceneKit for rendering. If you are ambitious and want to control the rendering yourself, you could use Metal or OpenGL.
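If you go that route, a rough sketch (not Vuforia or ARKit API; the readFrames(from:handler:) function is just an illustration) of pulling pixel buffers out of a pre-recorded video so you can feed them to your own tracking and rendering pipeline:

```swift
import AVFoundation

// Decode a pre-recorded video frame by frame and hand each pixel buffer
// (plus its timestamp) to a caller-supplied tracking/rendering callback.
func readFrames(from url: URL, handler: (CVPixelBuffer, CMTime) -> Void) throws {
    let asset = AVURLAsset(url: url)
    guard let track = asset.tracks(withMediaType: .video).first else { return }

    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ])
    reader.add(output)
    reader.startReading()

    while let sample = output.copyNextSampleBuffer() {
        if let pixelBuffer = CMSampleBufferGetImageBuffer(sample) {
            handler(pixelBuffer, CMSampleBufferGetPresentationTimeStamp(sample))
        }
    }
}
```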
I am trying to apply some filters with GPUImageVideoCamera, and on top of the GPUImageVideoCamera I am trying to add ARKit. But when the ARKit session starts, GPUImageVideoCamera stops working; it seems to pause.
I have also tried keeping the GPUImageVideoCamera-related part in view controller A and presenting view controller B with ARKit, but it has the same issue.
Any hint or help will be appreciated.
Thanks in advance.
You want ARKit to share the same camera with a GPUImage video instance? It's probably better to make use of the ARKit session's own features. According to the documentation: "An ARSession object coordinates the major processes that ARKit performs on your behalf to create an augmented reality experience. These processes include reading data from the device's motion sensing hardware, controlling the device's built-in camera, and performing image analysis on captured camera images..."
What about using the "currentFrame" property instead?
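For example, a minimal sketch (the helper function is hypothetical) of grabbing the latest camera image from the running session so you can run your own filters on it:

```swift
import ARKit
import CoreImage

// Instead of running a second camera pipeline, pull the latest camera image
// straight from the running ARSession and filter it yourself.
func latestCameraImage(from sceneView: ARSCNView) -> CIImage? {
    guard let frame = sceneView.session.currentFrame else { return nil }
    // capturedImage is the raw CVPixelBuffer from the camera, in sensor orientation.
    return CIImage(cvPixelBuffer: frame.capturedImage)
}
```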
Is it possible to use iPhone X Face ID data to create a 3D model of the user's face? If yes, can you please tell me where I should look? I was not really able to find anything related to this. I found a WWDC video about TrueDepth and ARKit, but I am not sure whether it would help.
Edit:
I just watched a WWDC video, and it says that ARKit provides detailed 3D face geometry. Do you think it's precise enough to create a 3D representation of a person's face, maybe combined with an image? Any ideas?
Yes and no.
Yes, there are APIs for getting depth maps captured with the TrueDepth camera, for face tracking and modeling, and for using Face ID to authenticate in your own app:
You implement Face ID support using the LocalAuthentication framework. It's the same API you use for Touch ID support on other devices — you don't get any access to the internals of how the authentication works or the biometric data involved, just a simple yes-or-no answer about whether the user passed authentication.
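A minimal sketch of that yes-or-no flow (using Face ID via LocalAuthentication also requires an NSFaceIDUsageDescription entry in Info.plist):

```swift
import LocalAuthentication

// Face ID (or Touch ID) gives you only a pass/fail result, never the biometric data.
func authenticate(completion: @escaping (Bool) -> Void) {
    let context = LAContext()
    var error: NSError?
    guard context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &error) else {
        completion(false)
        return
    }
    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Unlock your data") { success, _ in
        completion(success)
    }
}
```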
For simple depth map capture with photos and video, see AVFoundation > Cameras and Media Capture, or the WWDC17 session on the topic — everything about capturing depth with the iPhone 7 Plus dual back camera also applies to the iPhone X and 8 Plus dual back camera, and to the front TrueDepth camera on iPhone X.
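For example, a rough sketch of streaming depth from the TrueDepth camera with AVCaptureDepthDataOutput (the DepthStreamer class is just an illustration; photo-style capture with AVCapturePhotoOutput works similarly):

```swift
import AVFoundation

final class DepthStreamer: NSObject, AVCaptureDepthDataOutputDelegate {
    let session = AVCaptureSession()
    private let depthOutput = AVCaptureDepthDataOutput()

    func configure() throws {
        guard let device = AVCaptureDevice.default(.builtInTrueDepthCamera,
                                                   for: .video, position: .front) else { return }
        session.beginConfiguration()
        let input = try AVCaptureDeviceInput(device: device)
        if session.canAddInput(input) { session.addInput(input) }
        if session.canAddOutput(depthOutput) { session.addOutput(depthOutput) }
        depthOutput.setDelegate(self, callbackQueue: DispatchQueue(label: "depth"))
        session.commitConfiguration()
        session.startRunning()   // ideally call this off the main thread
    }

    func depthDataOutput(_ output: AVCaptureDepthDataOutput,
                         didOutput depthData: AVDepthData,
                         timestamp: CMTime,
                         connection: AVCaptureConnection) {
        // depthData.depthDataMap is a CVPixelBuffer of depth/disparity values.
    }
}
```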
For face tracking and modeling, see ARKit, specifically ARFaceTrackingConfiguration and related API. There's sample code showing the various basic things you can do here, as well as the Face Tracking with ARKit video you found.
Yes, indeed, you can create a 3D representation of a user's face with ARKit. The wireframe you see in that video is exactly that, and is provided by ARKit. With ARKit's SceneKit integration you can easily display that model, add textures to it, add other 3D content anchored to it, etc. ARKit also provides another form of face modeling called blend shapes — this is the more abstract representation of facial parameters, tracking 50 or so muscle movements, that gets used for driving avatar characters like Animoji.
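For example, a minimal sketch (the delegate class name is a placeholder) of displaying that face mesh with ARKit's SceneKit integration:

```swift
import ARKit
import SceneKit

// Show the tracked face mesh as a wireframe (assumes an ARSCNView whose delegate is this object
// and whose session is running an ARFaceTrackingConfiguration).
class FaceMeshDelegate: NSObject, ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        guard anchor is ARFaceAnchor,
              let device = renderer.device,
              let geometry = ARSCNFaceGeometry(device: device) else { return nil }
        geometry.firstMaterial?.fillMode = .lines   // wireframe, like the WWDC demo
        return SCNNode(geometry: geometry)
    }

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor,
              let geometry = node.geometry as? ARSCNFaceGeometry else { return }
        // Re-fit the mesh to the current facial expression each frame.
        geometry.update(from: faceAnchor.geometry)
    }
}
```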
All of this works with a generalized face model, so there's not really anything in there about identifying a specific user's face (and you're forbidden from trying to use it that way in the App Store — see §3.3.52 "If your application accesses face data..." in the developer program license agreement).
No, Apple provides no access to the data or analysis used for enrolling or authenticating Face ID. Gaze tracking / attention detection and whatever parts of Apple's face modeling have to do with identifying a unique user's face aren't parts of the SDK Apple provides.