I can't use the face tracker with the back camera in Android - ARCore

I'm creating a game that adds filters and 3D objects in real time, but I can't configure the back camera for face tracking. For example, I need to add a 3D object to a person using the back camera, and it has to track their rotation and movement. Thanks a lot.

The AugmentedFaces API works only with the front camera.
You can check out the shared camera access documentation and see whether you can combine ARCore with face detection from Firebase or OpenCV.

Related

ARFaceTracking and Face Filters on devices without depth camera

Apple provides a sample project for putting 3D content or face filters on people's faces. The 3D content tracks the face anchor and moves according to it, but this feature is supported only on devices with a TrueDepth camera. For example, we cannot use ARSCNFaceGeometry without TrueDepth. How do Facebook, or third-party SDKs like Banuba, make this work on devices without a depth camera?
As far as I know, using MediaPipe to get a face mesh is the only option without a TrueDepth camera.
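If you'd rather stay within Apple's SDKs, the Vision framework can also produce 2D face landmarks on any camera, with no TrueDepth hardware required. It won't give you a full 3D mesh the way MediaPipe does, but it can anchor simple 2D filters. A minimal sketch, assuming the pixel buffer comes from your own capture pipeline:

    import Vision
    import CoreVideo

    // Detect 2D face landmarks in a single frame. Works with any camera;
    // no TrueDepth hardware is required. `pixelBuffer` is assumed to
    // come from your capture pipeline.
    func detectFaceLandmarks(in pixelBuffer: CVPixelBuffer,
                             completion: @escaping ([VNFaceObservation]) -> Void) {
        let request = VNDetectFaceLandmarksRequest { request, error in
            guard error == nil,
                  let faces = request.results as? [VNFaceObservation] else {
                completion([])
                return
            }
            completion(faces)
        }
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up)
        // Run this off the main thread in production; perform(_:) is synchronous.
        try? handler.perform([request])
    }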

GPUImage : GPUImageVideoCamera with ARKit

I am trying to apply some filters on a GPUImageVideoCamera, and on top of the GPUImageVideoCamera I am trying to add ARKit. But when the ARKit session starts, the GPUImageVideoCamera stops working; it seems to pause.
I have also tried keeping the GPUImageVideoCamera-related part in view controller A and presenting a view controller B with ARKit, but it has the same issue.
Any hint or help will be appreciated.
Thanks in advance.
You want ARKit to share the same camera with a GPUImage video instance? It's probably better to use the ARKit session's own features. According to the documentation: "An ARSession object coordinates the major processes that ARKit performs on your behalf to create an augmented reality experience. These processes include reading data from the device's motion sensing hardware, controlling the device's built-in camera, and performing image analysis on captured camera images..." Since ARKit controls the built-in camera itself, a second capture pipeline such as GPUImageVideoCamera gets suspended when the session starts.
What about using the session's currentFrame property instead?
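For instance, instead of running GPUImage's own capture, you can pull each frame out of the ARKit session and filter it yourself. A minimal sketch using Core Image in place of GPUImage (the filter choice and display step are illustrative):

    import UIKit
    import ARKit
    import CoreImage

    final class FilteredARViewController: UIViewController, ARSessionDelegate {
        let session = ARSession()
        let ciContext = CIContext()

        override func viewDidLoad() {
            super.viewDidLoad()
            session.delegate = self
            session.run(ARWorldTrackingConfiguration())
        }

        // ARKit delivers every camera frame here; filter it instead of
        // capturing through a second (conflicting) camera pipeline.
        func session(_ session: ARSession, didUpdate frame: ARFrame) {
            let image = CIImage(cvPixelBuffer: frame.capturedImage)
            let filter = CIFilter(name: "CISepiaTone")!
            filter.setValue(image, forKey: kCIInputImageKey)
            filter.setValue(0.8, forKey: kCIInputIntensityKey)
            guard let output = filter.outputImage,
                  let cgImage = ciContext.createCGImage(output, from: output.extent)
            else { return }
            // Display `cgImage` in your own layer or view here.
            _ = cgImage
        }
    }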

How to use the iPhone X Face ID data

Is it possible to use iPhone X Face ID data to create a 3D model of the user's face? If so, can you please tell me where I should look? I wasn't really able to find anything related to this. I found a WWDC video about TrueDepth and ARKit, but I am not sure it would help.
Edit:
I just watched a WWDC video, and it says that ARKit provides detailed 3D face geometry. Do you think it's precise enough to create a 3D representation of a person's face? Maybe combined with an image? Any ideas?
Yes and no.
Yes, there are APIs for getting depth maps captured with the TrueDepth camera, for face tracking and modeling, and for using Face ID to authenticate in your own app:
You implement Face ID support using the LocalAuthentication framework. It's the same API you use for Touch ID support on other devices — you don't get any access to the internals of how the authentication works or the biometric data involved, just a simple yes-or-no answer about whether the user passed authentication.
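For illustration, a minimal sketch of that yes-or-no check via LocalAuthentication (the reason string and completion wiring are illustrative):

    import Foundation
    import LocalAuthentication

    // Ask the system to authenticate with Face ID (or Touch ID on older
    // devices). You never see the biometric data, only success or failure.
    func authenticateUser(completion: @escaping (Bool) -> Void) {
        let context = LAContext()
        var error: NSError?
        guard context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                                        error: &error) else {
            completion(false)  // biometrics unavailable or not enrolled
            return
        }
        context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                               localizedReason: "Unlock your account") { success, _ in
            DispatchQueue.main.async { completion(success) }
        }
    }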
For simple depth map capture with photos and video, see AVFoundation > Cameras and Media Capture, or the WWDC17 session on such — everything about capturing depth with the iPhone 7 Plus dual back camera also applies to the iPhone X and 8 Plus dual back camera, and to the front TrueDepth camera on iPhone X.
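As a rough sketch of what depth capture with the front TrueDepth camera looks like in AVFoundation (session wiring abbreviated; error handling is illustrative):

    import AVFoundation

    enum DepthCaptureError: Error { case noTrueDepthCamera }

    final class DepthCapture: NSObject, AVCapturePhotoCaptureDelegate {
        let session = AVCaptureSession()
        let photoOutput = AVCapturePhotoOutput()

        func configure() throws {
            guard let device = AVCaptureDevice.default(.builtInTrueDepthCamera,
                                                       for: .video, position: .front) else {
                throw DepthCaptureError.noTrueDepthCamera
            }
            session.beginConfiguration()
            session.addInput(try AVCaptureDeviceInput(device: device))
            session.addOutput(photoOutput)
            // Depth delivery must be enabled on the output before capturing.
            photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
            session.commitConfiguration()
            session.startRunning()
        }

        func capturePhotoWithDepth() {
            let settings = AVCapturePhotoSettings()
            settings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
            photoOutput.capturePhoto(with: settings, delegate: self)
        }

        func photoOutput(_ output: AVCapturePhotoOutput,
                         didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
            // AVDepthData wraps the per-pixel depth (or disparity) map.
            if let depth = photo.depthData {
                print("Got depth map:", depth.depthDataMap)
            }
        }
    }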
For face tracking and modeling, see ARKit, specifically ARFaceTrackingConfiguration and related API. There's sample code showing the various basic things you can do here, as well as the Face Tracking with ARKit video you found.
Yes, indeed, you can create a 3D representation of a user's face with ARKit. The wireframe you see in that video is exactly that, and is provided by ARKit. With ARKit's SceneKit integration you can easily display that model, add textures to it, add other 3D content anchored to it, etc. ARKit also provides another form of face modeling called blend shapes — this is the more abstract representation of facial parameters, tracking 50 or so muscle movements, that gets used for driving avatar characters like Animoji.
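A minimal sketch of that face tracking and mesh display with the SceneKit integration, assuming an ARSCNView set up elsewhere:

    import UIKit
    import ARKit
    import SceneKit

    final class FaceViewController: UIViewController, ARSCNViewDelegate {
        @IBOutlet var sceneView: ARSCNView!

        override func viewWillAppear(_ animated: Bool) {
            super.viewWillAppear(animated)
            guard ARFaceTrackingConfiguration.isSupported else { return }
            sceneView.delegate = self
            sceneView.session.run(ARFaceTrackingConfiguration())
        }

        // Provide a live face mesh for each detected face anchor.
        func renderer(_ renderer: SCNSceneRenderer,
                      nodeFor anchor: ARAnchor) -> SCNNode? {
            guard anchor is ARFaceAnchor,
                  let device = sceneView.device,
                  let geometry = ARSCNFaceGeometry(device: device) else { return nil }
            geometry.firstMaterial?.fillMode = .lines  // wireframe look
            return SCNNode(geometry: geometry)
        }

        // Keep the mesh in sync with the user's expression, and read
        // blend shapes (e.g. jaw openness) for avatar-style animation.
        func renderer(_ renderer: SCNSceneRenderer,
                      didUpdate node: SCNNode, for anchor: ARAnchor) {
            guard let faceAnchor = anchor as? ARFaceAnchor,
                  let geometry = node.geometry as? ARSCNFaceGeometry else { return }
            geometry.update(from: faceAnchor.geometry)
            let jawOpen = faceAnchor.blendShapes[.jawOpen]?.floatValue ?? 0
            _ = jawOpen  // drive your avatar here
        }
    }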
All of this works with a generalized face model, so there's not really anything in there about identifying a specific user's face (and you're forbidden from trying to use it that way in the App Store — see §3.3.52 "If your application accesses face data..." in the developer program license agreement).
No, Apple provides no access to the data or analysis used for enrolling or authenticating Face ID. Gaze tracking / attention detection and whatever parts of Apple's face modeling have to do with identifying a unique user's face aren't parts of the SDK Apple provides.

OpenCV and C++ real-time object detection

Hi, I use OpenCV to detect objects without any problem.
The problem is that when I move the camera, everything gets detected as moving, because I detect motion in real time without using color. How can I recognize whether the object is moving or the camera is? I thought about this and came up with an idea:
First, add a point at the center of the image (the image comes from video). Then, when I check for a moving object, if its distance to that point didn't change, the object didn't move and the motion came from the camera. Is my idea good, and how do I add an object or point to the image?
I assume you would like to tell whether the object or the camera is moving. With only one camera, the usual solutions are to use a reference (non-moving) object or a mechanical sensor for camera movement. If you use two cameras, you can usually calibrate them and use stereo vision formulations to solve the problem.
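One way to apply the reference idea with a single camera: if most tracked feature points share roughly the same displacement between frames, that shared motion belongs to the camera, and points that deviate from it belong to moving objects. A rough sketch of that classification (written in Swift for consistency with the rest of this page; the tracked points are assumed to come from your feature tracker, e.g. optical flow):

    import Foundation

    struct TrackedPoint { var x: Double; var y: Double }

    // Classify each tracked point's motion between two frames. The median
    // displacement approximates the global (camera) motion; points whose
    // displacement deviates from it are treated as genuinely moving objects.
    func movingObjectFlags(previous: [TrackedPoint], current: [TrackedPoint],
                           thresholdPixels: Double = 5.0) -> [Bool] {
        guard !previous.isEmpty, previous.count == current.count else { return [] }
        let dx = zip(previous, current).map { $1.x - $0.x }
        let dy = zip(previous, current).map { $1.y - $0.y }
        let cameraDX = dx.sorted()[dx.count / 2]  // median x displacement
        let cameraDY = dy.sorted()[dy.count / 2]  // median y displacement
        return zip(dx, dy).map { pdx, pdy in
            let rx = pdx - cameraDX, ry = pdy - cameraDY
            return (rx * rx + ry * ry).squareRoot() > thresholdPixels
        }
    }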

Detect presence of objects using OpenCV in live iphone camera

Can anyone help me detect objects in real time in the iPhone camera using OpenCV?
My actual objective is to alert the user when an object enters a specific location in my application's camera view.
My current thinking is to capture an image corresponding to my camera overlay view, which represents a specific location of the camera view, and then process that image with OpenCV to detect objects by color. If I can identify an object in that image, I will alert the user in the camera overlay itself. However, I don't know how to detect an object from a UIImage.
Please point me to any other good way to achieve my goal. Thanks in advance.
I solved my issue in the following way (a sketch of the capture module follows below):
1. Created an image capture module with AVFoundation classes (AVCaptureSession).
2. Captured image buffers continuously through a timer working alongside the camera module.
3. Processed the captured frames to find objects with OpenCV (cropping, grayscale, thresholding, feature detection, etc.). Reference: http://docs.opencv.org/doc/tutorials/tutorials.html
4. Alerted the user through an animated camera overlay view.
Anyway, detecting objects through image processing alone is not very accurate. To detect objects reliably in a real live-streaming scenario, we would need an object sensor (like the depth sensor in a Kinect camera or similar), or perhaps have to build an AI model for it to work perfectly.
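For reference, a minimal sketch of the capture module from step 1, using AVCaptureVideoDataOutput; it pushes frames continuously, so the timer from step 2 becomes optional. The OpenCV bridge at the end is a hypothetical placeholder:

    import AVFoundation

    final class FrameGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
        let session = AVCaptureSession()
        private let queue = DispatchQueue(label: "camera.frames")

        func start() throws {
            guard let camera = AVCaptureDevice.default(for: .video) else { return }
            session.addInput(try AVCaptureDeviceInput(device: camera))
            let output = AVCaptureVideoDataOutput()
            output.setSampleBufferDelegate(self, queue: queue)
            output.alwaysDiscardsLateVideoFrames = true  // drop frames if OpenCV is slow
            session.addOutput(output)
            session.startRunning()
        }

        // Called for every captured frame; hand the pixel buffer to OpenCV here.
        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
            processWithOpenCV(pixelBuffer)
        }

        private func processWithOpenCV(_ buffer: CVPixelBuffer) {
            // Hypothetical bridge into your OpenCV code via an Obj-C++ wrapper
            // (crop, grayscale, threshold, feature detection, ...).
        }
    }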
