How to change the Vuforia video source to a custom camera - OpenCV

I would like to replace the source camera video with a custom camera stream, using Vuforia and Unity:
Take the video stream from the camera (Android camera or webcam).
Improve contrast, brightness, or other parameters manually (for example, with OpenCV), and add elements or extra patterns that Vuforia could recognize more reliably.
Feed the modified video stream back into Unity 3D and have it detected by Vuforia.
Is this possible?
Is there another way?

As far as I know, this is not possible. Vuforia takes its input directly from the camera and processes it; the most you can do is alter some of the camera settings (if you want to explore that, read about the Vuforia advanced camera API), but given your requirements that is not enough.
Your only option, if you must process the input video, is to handle the detection and tracking yourself without Vuforia (for example, using OpenCV), which is obviously not so easy...
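The preprocessing half at least (the contrast/brightness step from the question) is straightforward in OpenCV. A minimal Python sketch, where the camera index and the alpha/beta values are arbitrary placeholders:

```python
import cv2

# Open the default camera (index 0); adjust for other devices.
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Simple linear contrast/brightness adjustment:
    # output = alpha * input + beta (values here are placeholders).
    enhanced = cv2.convertScaleAbs(frame, alpha=1.5, beta=20)

    cv2.imshow("enhanced", enhanced)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```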

You can use any software that fakes a webcam, such as http://perfectfakewebcam.com/.
Just prepare your video and feed it to the fake-webcam software, then, from Unity, change the Vuforia camera device to the fake webcam.
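For a live stream rather than a pre-recorded video, one assumed variation on this idea is to push OpenCV-processed frames into a virtual webcam yourself, e.g. with the pyvirtualcam package (which requires a virtual-camera backend such as OBS Virtual Camera to be installed). A sketch under those assumptions, with placeholder capture settings:

```python
import cv2
import pyvirtualcam

cap = cv2.VideoCapture(0)
width, height, fps = 1280, 720, 30  # assumed capture settings

with pyvirtualcam.Camera(width=width, height=height, fps=fps) as cam:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, (width, height))
        # Enhance the frame before handing it to the virtual camera.
        frame = cv2.convertScaleAbs(frame, alpha=1.3, beta=15)
        # pyvirtualcam expects RGB; OpenCV delivers BGR.
        cam.send(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        cam.sleep_until_next_frame()
```

In Unity you would then point Vuforia at the virtual device, the same way as with the fake-webcam tool above.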

Related

ARFaceTracking and face filters on devices without a depth camera

Apple provides a sample project for putting 3D content or face filters on people's faces. The 3D content tracks the face anchor and moves according to it. But this feature is only supported on devices that have a TrueDepth camera. For example, we cannot use ARSCNFaceGeometry without TrueDepth. How do Facebook or third-party SDKs like Banuba make this work on devices without a depth camera?
As far as I know, using MediaPipe to get a face mesh is the only possibility without a TrueDepth camera.
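As a rough illustration of that approach, MediaPipe's Face Mesh solution returns a dense set of facial landmarks from an ordinary RGB camera, no depth sensor required. A minimal Python sketch (the drawing part is just for visualization):

```python
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(
    static_image_mode=False,  # video mode: track landmarks across frames
    max_num_faces=1,
)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV delivers BGR.
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        h, w = frame.shape[:2]
        for lm in results.multi_face_landmarks[0].landmark:
            cv2.circle(frame, (int(lm.x * w), int(lm.y * h)), 1, (0, 255, 0), -1)
    cv2.imshow("face mesh", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

The mesh landmarks can then drive 3D content much like ARKit's face anchor does, which is presumably how RGB-only SDKs approach the problem.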

Can we use Vuforia for embedding markers into pre-recorded videos instead of a live camera feed?

I am currently working on a service to update the content of a video based on the markers present in the video. I was curious whether we can use Vuforia to achieve this by providing the pre-recorded video as input to Vuforia instead of the live camera feed from the mobile phone.
TL;DR: This is not possible, because replacing the camera is not a function that either Vuforia or ARKit exposes.
Aside from not exposing the camera, both frameworks use a combination of camera input and sensor data (gyroscope, accelerometer, compass, altitude, etc.) to calculate the camera/phone's position (translation/rotation) relative to the marker image.
The effect you are looking for is image tracking and rendering within a video feed. You should consider OpenCV, or some other computer vision library, for the feature-point tracking. For rendering there are three options: SceneKit, Metal, or OpenGL. Following Apple's lead, you could use SceneKit for the rendering, similar to how ARKit handles the sensor inputs and uses SceneKit for rendering. If you are ambitious and want to control the rendering as well, you could use Metal or OpenGL.
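For the feature-point tracking half, a hedged Python/OpenCV sketch of one common approach: detect ORB features in the marker image, match them against each frame of the pre-recorded video, and estimate a homography to localize the marker. The file paths and the match threshold are placeholders, and the rendering half (SceneKit/Metal/OpenGL) is outside this sketch's scope:

```python
import cv2
import numpy as np

marker = cv2.imread("marker.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
orb = cv2.ORB_create(nfeatures=1000)
kp_marker, des_marker = orb.detectAndCompute(marker, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

cap = cv2.VideoCapture("input.mp4")  # placeholder pre-recorded video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    kp_frame, des_frame = orb.detectAndCompute(gray, None)
    if des_frame is None:
        continue
    matches = matcher.match(des_marker, des_frame)
    if len(matches) >= 10:  # arbitrary minimum-match threshold
        src = np.float32([kp_marker[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_frame[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        if H is not None:
            # Project the marker's corners into the frame to outline it;
            # a renderer would use H (plus camera intrinsics) to place content.
            h, w = marker.shape
            corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
            projected = cv2.perspectiveTransform(corners, H)
            cv2.polylines(frame, [np.int32(projected)], True, (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
```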

Is there front-facing camera support with ARKit?

How can we access front-facing camera images with ARCamera or ARSCNView, and is it possible to record an ARSCNView just like a camera recording?
Regarding the front-facing camera: in short, no.
ARKit offers two basic kinds of AR experience:
World Tracking (ARWorldTrackingConfiguration), using the back-facing camera, where a user looks "through" the device at an augmented view of the world around them. (There's also AROrientationTrackingConfiguration, a reduced-quality version of world tracking, which also uses only the back-facing camera.)
Face Tracking (ARFaceTrackingConfiguration), supported only with the front-facing TrueDepth camera on iPhone X, where the user sees an augmented view of themselves in the front-facing camera view. (As @TawaNicolas notes, Apple has sample code here... which, until iPhone X actually becomes available, you can read but not run.)
In addition to the hardware requirement, face tracking and world tracking are mostly orthogonal feature sets. So even though there is a way to use the front-facing camera (on iPhone X only), it doesn't give you an experience equivalent to what you get with the back-facing camera in ARKit.
Regarding video recording in the AR experience: you can use ReplayKit in an ARKit app the same as in any other app.
If you want to record just the camera feed, there isn't a high-level API for that, but in theory you might have some success feeding the pixel buffers you get in each ARFrame to AVAssetWriter.
As far as I know, ARKit with Front Facing Camera is only supported for iPhone X.
Here's Apple's sample code regarding this topic.
If you want to access the UIKit or AVFoundation cameras, you still can, but separately from ARSCNView. For example, I'm loading UIKit's UIImagePickerController from an IBAction; it's a little awkward to do, but it works for my purposes (loading/creating image and video assets).

In Xcode, how can I do a real-time scan of the camera preview frame by frame using Tesseract OCR?

As many of you know, Tesseract does character recognition on still photos or images. I'm using Xcode for my iOS app, and this is my problem: how can I use Tesseract to scan the live camera preview? An app that does this is Word Lens; it performs frame-by-frame live recognition and translation of the text being previewed by the camera. I'm trying to do this live character recognition without the translation part. What is the best approach for a real-time, frame-by-frame scan of the camera preview using Tesseract OCR? Thanks.
I have tested this, and performance is too low: the camera outputs eight pictures per second, but OCR on a single frame takes about 2 seconds.
See: A (quasi-) real-time video processing on iOS, tesseract-ios, and How can I make Tesseract on iOS faster.
Maybe we need to use OpenCV.
Alternatively, you can use another free product that does OCR on the camera preview: ABBYY Real-Time Recognition OCR.
Disclaimer: I work for ABBYY.
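To make the OpenCV suggestion above concrete: as a prototyping sketch outside Xcode (not iOS code), Python's pytesseract and OpenCV bindings let you test how much downscaling and binarization reduce Tesseract's per-frame cost. Everything here is an assumption about the pipeline:

```python
import cv2
import pytesseract

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Downscale and binarize first; Tesseract is much faster on small,
    # high-contrast input than on raw camera frames. In a real app you
    # would also OCR only every Nth frame rather than all of them.
    small = cv2.resize(frame, None, fx=0.5, fy=0.5)
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    text = pytesseract.image_to_string(binary)
    if text.strip():
        print(text.strip())
    cv2.imshow("preview", binary)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
```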

Can we detect a live image for performing augmented reality using the Qualcomm QCAR SDK?

I want to detect an image of my hand (wrist) using Qualcomm's QCAR SDK, so that we can then place a virtual object (e.g., a watch) on it. Is it possible to achieve this with the Qualcomm SDK? Can we reliably detect a live image of a hand using the QCAR SDK?
No. QCAR can only reliably detect and track planar (i.e., flat) images, and the image needs to have certain characteristics, such as sufficient contrast and complexity. It can't track 3D surfaces like your hand. You'd need to have the person hold the tracking target, or attach it to their hand (e.g., using a glove).
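As a rough, assumed proxy for the "sufficient contrast and complexity" requirement, you can count how many trackable features a candidate target image yields, for example with OpenCV's ORB detector (the path and feature budget are placeholders):

```python
import cv2

def feature_count(path: str) -> int:
    """Count ORB keypoints in a candidate tracking-target image."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints = orb.detect(img, None)
    return len(keypoints)

# A flat, detailed target (e.g., a printed card) should yield many
# keypoints; a smooth, low-contrast surface like skin yields few,
# which is one reason hands and wrists track poorly.
print(feature_count("target.png"))  # placeholder path
```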
