How to run ARCore face tracking per camera frame (instead of onDraw)?

ARCore has very impressive face-tracking capabilities, but all the examples I have seen define a drawable and then have the face tracker update on every draw call. Is there any way to get updated face meshes with every new camera frame?
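For reference, a minimal sketch (not from the official samples) of one way to approach this. ARCore only delivers new data from Session.update(), so the sketch keeps calling update() from wherever you already do (typically the GL thread with the camera texture set), configures Config.UpdateMode.BLOCKING so update() is paced by the camera, and guards on Frame.getTimestamp() so the face meshes are processed exactly once per new camera frame rather than once per draw. The class name and handleFace method are placeholders, not ARCore API.

```java
import com.google.ar.core.AugmentedFace;
import com.google.ar.core.Config;
import com.google.ar.core.Frame;
import com.google.ar.core.Session;
import com.google.ar.core.TrackingState;
import com.google.ar.core.exceptions.CameraNotAvailableException;

import java.nio.FloatBuffer;
import java.nio.ShortBuffer;

class FaceMeshPerFrame {
    private long lastTimestamp = -1;

    // Call once after creating the (front-camera) session.
    void configure(Session session) {
        Config config = session.getConfig();
        config.setAugmentedFaceMode(Config.AugmentedFaceMode.MESH3D);
        // BLOCKING makes update() wait for the next camera frame instead of
        // returning the most recent one repeatedly.
        config.setUpdateMode(Config.UpdateMode.BLOCKING);
        session.configure(config);
    }

    // Call from wherever you already call session.update() (typically the GL thread).
    void onUpdate(Session session) throws CameraNotAvailableException {
        Frame frame = session.update();
        if (frame.getTimestamp() == lastTimestamp) {
            return;                                    // same camera frame as last time; skip
        }
        lastTimestamp = frame.getTimestamp();

        for (AugmentedFace face : frame.getUpdatedTrackables(AugmentedFace.class)) {
            if (face.getTrackingState() != TrackingState.TRACKING) {
                continue;
            }
            FloatBuffer vertices = face.getMeshVertices();        // x, y, z per vertex
            ShortBuffer indices = face.getMeshTriangleIndices();  // triangle indices
            handleFace(face, vertices, indices);                  // per-camera-frame consumer
        }
    }

    private void handleFace(AugmentedFace face, FloatBuffer vertices, ShortBuffer indices) {
        // e.g. copy the mesh out for processing that is decoupled from rendering
    }
}
```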

Related

Creating a Trajectory using a 360 camera video without use of GPS, IMU, sensor, ROS or LIDAR

The input is a video created with a 360 camera (Samsung Gear 360). I need to plot a trajectory (without using ground-truth poses) as I move around an indoor location, i.e. I need to know the camera locations and plot them accordingly.
First, camera calibration was done by capturing 21 pictures of a chessboard; using OpenCV methods, the camera matrix (a 3x3 matrix containing fx, fy, cx, cy, and the skew factor) was obtained and written to a text file.
I have tried feature detection (ORB, SIFT, AKAZE, ...) and tracking (FLANN and brute-force) methods. This works well for a single space but fails when the video covers a multi-storey building. Tested on this multi-storey building: https://youtu.be/6DPFcKoHiak and the results obtained were:
An example of the kind of camera motion estimation that is required: https://arxiv.org/pdf/2003.08056.pdf
Any help on how to plot camera poses using VSLAM, visual odometry, or any other method would be appreciated.
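For the feature-based approach described above, here is a rough visual-odometry sketch with OpenCV's Java bindings. The video file name and intrinsics are placeholders, and it only recovers the relative rotation/translation between consecutive frames from the essential matrix: monocular VO is scale-ambiguous, and a 360/equirectangular video would first need to be reprojected to a pinhole model for this to be meaningful.

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.*;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.ORB;
import org.opencv.imgproc.Imgproc;
import org.opencv.videoio.VideoCapture;

import java.util.ArrayList;
import java.util.List;

public class VoSketch {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Intrinsics from the chessboard calibration (placeholder values).
        double fx = 800, fy = 800, cx = 640, cy = 360;
        Mat K = new Mat(3, 3, CvType.CV_64F);
        K.put(0, 0, fx, 0, cx, 0, fy, cy, 0, 0, 1);

        VideoCapture cap = new VideoCapture("walkthrough.mp4");   // hypothetical input video
        ORB orb = ORB.create(2000);
        DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);

        MatOfKeyPoint prevKp = new MatOfKeyPoint();
        Mat prevDesc = new Mat();
        Mat frame = new Mat(), gray = new Mat();

        while (cap.read(frame)) {
            Imgproc.cvtColor(frame, gray, Imgproc.COLOR_BGR2GRAY);
            MatOfKeyPoint kp = new MatOfKeyPoint();
            Mat desc = new Mat();
            orb.detectAndCompute(gray, new Mat(), kp, desc);

            if (!prevDesc.empty() && !desc.empty()) {
                MatOfDMatch matches = new MatOfDMatch();
                matcher.match(prevDesc, desc, matches);

                // Collect matched point pairs from the previous and current frame.
                List<Point> p1 = new ArrayList<>(), p2 = new ArrayList<>();
                KeyPoint[] k1 = prevKp.toArray(), k2 = kp.toArray();
                for (DMatch m : matches.toArray()) {
                    p1.add(k1[m.queryIdx].pt);
                    p2.add(k2[m.trainIdx].pt);
                }

                if (p1.size() >= 8) {
                    MatOfPoint2f pts1 = new MatOfPoint2f(p1.toArray(new Point[0]));
                    MatOfPoint2f pts2 = new MatOfPoint2f(p2.toArray(new Point[0]));
                    Mat E = Calib3d.findEssentialMat(pts1, pts2, K, Calib3d.RANSAC, 0.999, 1.0);
                    Mat R = new Mat(), t = new Mat();
                    Calib3d.recoverPose(E, pts1, pts2, K, R, t);
                    // R, t are the relative motion (t only up to scale); chain them
                    // frame-to-frame to accumulate a trajectory for plotting.
                    System.out.println("relative t = " + t.dump());
                }
            }
            prevKp = kp;
            desc.copyTo(prevDesc);
        }
        cap.release();
    }
}
```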

ARFaceTracking and Face Filters on devices without depth camera

Apple provides a sample project for putting 3D content or face filters on people's faces. The 3D content tracks the face anchor and moves with it. But this functionality is only supported on devices with a TrueDepth camera; for example, we cannot use ARSCNFaceGeometry without TrueDepth. How do Facebook or third-party SDKs like Banuba make this work on devices without a depth camera?
As far as I know, using MediaPipe to get a face mesh is the only possibility without a TrueDepth camera.
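For what it's worth, a rough illustration of that MediaPipe route, written against the (now legacy) MediaPipe Android Solutions Face Mesh API for consistency with the rest of this page; the class and option names below are quoted from that legacy API from memory and may differ in current MediaPipe releases, so treat this as a sketch rather than copy-paste code. The same face-mesh graph is also available on iOS, which is how it can replace TrueDepth-based tracking.

```java
import android.content.Context;
import android.graphics.Bitmap;

import com.google.mediapipe.solutions.facemesh.FaceMesh;
import com.google.mediapipe.solutions.facemesh.FaceMeshOptions;

class FaceMeshWithoutTrueDepth {
    private final FaceMesh faceMesh;

    FaceMeshWithoutTrueDepth(Context context) {
        FaceMeshOptions options = FaceMeshOptions.builder()
                .setStaticImageMode(true)      // single images; set false for a video stream
                .setMaxNumFaces(1)
                .setRefineLandmarks(true)
                .setRunOnGpu(true)
                .build();
        faceMesh = new FaceMesh(context, options);
        faceMesh.setResultListener(result -> {
            // Hundreds of face landmarks estimated from a plain RGB image,
            // i.e. no TrueDepth/depth sensor involved.
            if (!result.multiFaceLandmarks().isEmpty()) {
                int landmarks = result.multiFaceLandmarks().get(0).getLandmarkCount();
            }
        });
        faceMesh.setErrorListener((message, e) -> { /* handle graph errors */ });
    }

    void analyze(Bitmap photo) {
        faceMesh.send(photo);                  // results arrive on the listener above
    }
}
```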

Can we use Vuforia for embedding markers into pre-recorded videos instead of a live camera feed?

I am currently working on a service that updates the content of a video based on the markers present in the video. I was curious whether Vuforia can be used to achieve this by providing the pre-recorded video as input instead of the live camera feed from the mobile phone.
TL;DR: This is not possible, because replacing the camera feed is not a function that either Vuforia or ARKit exposes.
Aside from not exposing the camera, both frameworks use a combination of camera input and sensor data (gyro, accelerometer, compass, altitude, etc.) to calculate the camera/phone's position (translation/rotation) relative to the marker image.
The effect you are looking for is image tracking and rendering within a video feed. You should consider OpenCV, or another computer vision library, for the feature-point tracking. For the rendering there are three options: SceneKit, Metal, or OpenGL. Following Apple's lead you could use SceneKit for the rendering, similar to how ARKit handles the sensor inputs and uses SceneKit to render. If you are ambitious and want to control the rendering yourself, you could use Metal or OpenGL.
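A rough sketch of the OpenCV half of that suggestion, using the Java bindings: detect a known marker image in each frame of a pre-recorded video with ORB features and a RANSAC homography. The file names are placeholders, the native OpenCV library is assumed to be loaded, and rendering the overlay (SceneKit, Metal, or OpenGL) is left out.

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.*;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.ORB;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import org.opencv.videoio.VideoCapture;

import java.util.ArrayList;
import java.util.List;

public class MarkerInVideo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat marker = Imgcodecs.imread("marker.png", Imgcodecs.IMREAD_GRAYSCALE);  // known marker image
        ORB orb = ORB.create(1000);
        MatOfKeyPoint markerKp = new MatOfKeyPoint();
        Mat markerDesc = new Mat();
        orb.detectAndCompute(marker, new Mat(), markerKp, markerDesc);

        DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
        VideoCapture video = new VideoCapture("recorded.mp4");                    // pre-recorded input
        Mat frame = new Mat(), gray = new Mat();

        while (video.read(frame)) {
            Imgproc.cvtColor(frame, gray, Imgproc.COLOR_BGR2GRAY);
            MatOfKeyPoint frameKp = new MatOfKeyPoint();
            Mat frameDesc = new Mat();
            orb.detectAndCompute(gray, new Mat(), frameKp, frameDesc);
            if (frameDesc.empty()) continue;

            MatOfDMatch matches = new MatOfDMatch();
            matcher.match(markerDesc, frameDesc, matches);

            List<Point> src = new ArrayList<>(), dst = new ArrayList<>();
            KeyPoint[] mk = markerKp.toArray(), fk = frameKp.toArray();
            for (DMatch m : matches.toArray()) {
                src.add(mk[m.queryIdx].pt);   // point on the marker image
                dst.add(fk[m.trainIdx].pt);   // corresponding point in the video frame
            }
            if (src.size() < 8) continue;     // not enough matches in this frame

            Mat H = Calib3d.findHomography(
                    new MatOfPoint2f(src.toArray(new Point[0])),
                    new MatOfPoint2f(dst.toArray(new Point[0])),
                    Calib3d.RANSAC, 5.0);
            // H maps marker coordinates into this frame: warp a 2D overlay with
            // Imgproc.warpPerspective, or derive a pose from it to drive a 3D renderer.
        }
        video.release();
    }
}
```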

I can't use the face tracker with the back camera in Android

I am creating a game that adds filters and 3D objects in real time, but I can't configure the face tracker to use the back camera. For example, I need to attach a 3D object to a person using the back camera, and it has to track their rotation and movement. Thanks a lot.
The AugmentedFaces API works only with the front camera.
You can check out the shared camera access documentation and see whether you can combine ARCore with face detection from Firebase or OpenCV.
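As an illustration of combining the two, here is a hedged sketch that takes the CPU image from each ARCore frame (which works with a back-camera session) and runs it through ML Kit face detection, the successor to the Firebase face-detection API mentioned above. This only yields 2D bounding boxes and landmarks, not the 3D AugmentedFaces mesh, and the rotation value passed to InputImage is a placeholder.

```java
import android.media.Image;

import com.google.ar.core.Frame;
import com.google.ar.core.exceptions.NotYetAvailableException;
import com.google.mlkit.vision.common.InputImage;
import com.google.mlkit.vision.face.FaceDetection;
import com.google.mlkit.vision.face.FaceDetector;
import com.google.mlkit.vision.face.FaceDetectorOptions;

class BackCameraFaceDetection {
    private final FaceDetector detector = FaceDetection.getClient(
            new FaceDetectorOptions.Builder()
                    .setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_FAST)
                    .build());

    // Call with the Frame returned by session.update() on a back-camera session.
    void detectFaces(Frame frame) {
        Image image;
        try {
            image = frame.acquireCameraImage();          // CPU copy of the camera frame (YUV_420_888)
        } catch (NotYetAvailableException e) {
            return;                                      // no CPU image available for this frame yet
        }
        InputImage input = InputImage.fromMediaImage(image, /* rotationDegrees= */ 0); // placeholder rotation
        detector.process(input)
                .addOnSuccessListener(faces -> {
                    // faces is a List<Face>: 2D bounding boxes and landmarks you can
                    // use to place content over the detected person.
                })
                .addOnCompleteListener(task -> image.close()); // release ARCore's image when done
    }
}
```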

Is the stitching module of OpenCV able to stitch images taken from a camera in parallel (translational) motion?

I was wondering whether the stitching module of OpenCV (http://docs.opencv.org/modules/stitching/doc/stitching.html) is able to stitch images taken from a camera that moves parallel to the plane being photographed.
I know that panoramic stitching tools generally assume that the camera center is fixed and that the camera only experiences rotational motion such as pan or pitch.
I was hoping to use this module to stitch images taken from a camera that moves parallel to the plane; the idea is to create a panoramic map of the ground.
Just for the record.
The current stitching utility in OpenCV does not account for translation of the camera; it assumes the camera only rotates about its axis, so it essentially tries to project the images onto a cylindrical or spherical canvas.
In my case I needed to account for translational motion when estimating the camera transformation, and that is not possible with the existing stitching utility of OpenCV.
These observations are based on a walk-through of the OpenCV code and on experiments.
You are welcome to correct this information or add to it so that it can serve as a useful future reference.
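To make the usual workaround concrete, here is a minimal sketch with the OpenCV Java bindings that estimates a homography between two overlapping ground images itself and warps one into the other's frame, which is the common substitute for the Stitcher when the camera translates over a roughly planar scene. File names and the canvas size are placeholders, and blending/exposure compensation are ignored.

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.*;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.ORB;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

import java.util.ArrayList;
import java.util.List;

public class GroundMosaic {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat base = Imgcodecs.imread("ground_0.png");   // placeholder file names
        Mat next = Imgcodecs.imread("ground_1.png");
        Mat baseGray = new Mat(), nextGray = new Mat();
        Imgproc.cvtColor(base, baseGray, Imgproc.COLOR_BGR2GRAY);
        Imgproc.cvtColor(next, nextGray, Imgproc.COLOR_BGR2GRAY);

        // Match features between the two overlapping ground images.
        ORB orb = ORB.create(1500);
        MatOfKeyPoint kp1 = new MatOfKeyPoint(), kp2 = new MatOfKeyPoint();
        Mat d1 = new Mat(), d2 = new Mat();
        orb.detectAndCompute(baseGray, new Mat(), kp1, d1);
        orb.detectAndCompute(nextGray, new Mat(), kp2, d2);
        MatOfDMatch matches = new MatOfDMatch();
        DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING).match(d2, d1, matches);

        List<Point> src = new ArrayList<>(), dst = new ArrayList<>();
        KeyPoint[] a1 = kp1.toArray(), a2 = kp2.toArray();
        for (DMatch m : matches.toArray()) {
            src.add(a2[m.queryIdx].pt);        // points in the new image
            dst.add(a1[m.trainIdx].pt);        // corresponding points in the base image
        }

        // A full homography handles a translating camera over a planar scene,
        // which the rotation-only model behind the Stitcher cannot.
        Mat H = Calib3d.findHomography(
                new MatOfPoint2f(src.toArray(new Point[0])),
                new MatOfPoint2f(dst.toArray(new Point[0])),
                Calib3d.RANSAC, 3.0);

        // Warp the new image into the base image's coordinates and paste the base on top.
        Mat mosaic = new Mat();
        Imgproc.warpPerspective(next, mosaic, H, new Size(base.cols() * 2, base.rows() * 2));
        base.copyTo(mosaic.submat(new Rect(0, 0, base.cols(), base.rows())));
        Imgcodecs.imwrite("mosaic.png", mosaic);
    }
}
```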
