How to convert ARCore frames to WebRTC frames

I'm making a video-call Android app with augmented face effects, using ARCore and WebRTC.
However, WebRTC and ARCore use different frame structures.
So I use PixelCopy to copy each ARCore frame into a Bitmap, and then convert that into a WebRTC frame.
With this method, however, the audio and video end up out of sync.
Is there any other way?
Any advice would be a great help.
Thanks
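For reference, here is a minimal sketch of the Bitmap-to-VideoFrame step, assuming a recent org.webrtc Android build and an ARGB_8888 Bitmap (e.g. one filled via PixelCopy from the view ARCore renders into); the class and method names are illustrative, not from the question:

import android.graphics.Bitmap;
import android.os.SystemClock;
import java.nio.ByteBuffer;
import java.util.concurrent.TimeUnit;
import org.webrtc.CapturerObserver;
import org.webrtc.JavaI420Buffer;
import org.webrtc.VideoFrame;

// Illustrative helper: converts an ARGB_8888 Bitmap to an I420 VideoFrame
// and hands it to the CapturerObserver of a custom VideoCapturer.
public final class BitmapFramePusher {

    public static void pushFrame(Bitmap bitmap, CapturerObserver observer) {
        int width = bitmap.getWidth();
        int height = bitmap.getHeight();
        int[] argb = new int[width * height];
        bitmap.getPixels(argb, 0, width, 0, 0, width, height);

        JavaI420Buffer buffer = JavaI420Buffer.allocate(width, height);
        ByteBuffer dataY = buffer.getDataY();
        ByteBuffer dataU = buffer.getDataU();
        ByteBuffer dataV = buffer.getDataV();
        int strideY = buffer.getStrideY();
        int strideU = buffer.getStrideU();
        int strideV = buffer.getStrideV();

        // Plain BT.601 ARGB -> I420 conversion; chroma is subsampled 2x2.
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int px = argb[y * width + x];
                int r = (px >> 16) & 0xFF, g = (px >> 8) & 0xFF, b = px & 0xFF;
                dataY.put(y * strideY + x,
                        (byte) (((66 * r + 129 * g + 25 * b + 128) >> 8) + 16));
                if ((y & 1) == 0 && (x & 1) == 0) {
                    dataU.put((y / 2) * strideU + x / 2,
                            (byte) (((-38 * r - 74 * g + 112 * b + 128) >> 8) + 128));
                    dataV.put((y / 2) * strideV + x / 2,
                            (byte) (((112 * r - 94 * g - 18 * b + 128) >> 8) + 128));
                }
            }
        }

        // A/V sync hinges on this: stamp the frame with the monotonic clock
        // WebRTC's audio path uses, taken at capture time.
        long timestampNs = TimeUnit.MILLISECONDS.toNanos(SystemClock.elapsedRealtime());
        VideoFrame frame = new VideoFrame(buffer, 0 /* rotation */, timestampNs);
        observer.onFrameCaptured(frame);
        frame.release();
    }
}

Note the timestamp: if frames are stamped when they are converted or encoded rather than when they were captured, the video lags the audio, which may be the sync problem you are seeing. If the CPU copy itself is too slow, one alternative worth exploring is skipping the Bitmap entirely and wrapping the GPU texture in a texture-backed VideoFrame.Buffer via SurfaceTextureHelper.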

Related

Can we use vuforia for embedding markers into pre-recorded videos instead of a live camera feed?

I am currently working on a service to update the content of a video based on the markers present in the video. I was curious whether we can use Vuforia to achieve this by providing the pre-recorded video as input to Vuforia instead of the live camera feed from the mobile phone.
TL;DR: This is not possible, because replacing the camera feed is not a function that either Vuforia or ARKit exposes.
Aside from not exposing the camera, both frameworks use a combination of camera input and sensor data (gyro, accelerometer, compass, altitude, etc.) to calculate the camera/phone's position (translation/rotation) relative to the marker image.
The effect you are looking for is image tracking and rendering within a video feed. You should consider OpenCV, or some other computer vision library, for the feature-point tracking. As for rendering, there are three options: SceneKit, Metal, or OpenGL. Following Apple's lead, you could use SceneKit for the rendering, similar to how ARKit handles the sensor inputs and uses SceneKit for rendering. If you are ambitious and want to control the rendering as well, you could use Metal or OpenGL.
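To make the OpenCV suggestion concrete, here is a rough per-frame sketch: locate the marker with ORB features and a RANSAC homography, then hand the resulting transform to whichever renderer you pick. It uses OpenCV's Java bindings purely for illustration (the file names are placeholders; the same calls exist in the C++ and Objective-C APIs):

import java.util.ArrayList;
import java.util.List;
import org.opencv.calib3d.Calib3d;
import org.opencv.core.*;
import org.opencv.features2d.BFMatcher;
import org.opencv.features2d.ORB;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import org.opencv.videoio.VideoCapture;

public final class MarkerInVideo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Placeholder file names for the marker image and pre-recorded video.
        Mat marker = Imgcodecs.imread("marker.png", Imgcodecs.IMREAD_GRAYSCALE);
        VideoCapture video = new VideoCapture("input.mp4");

        ORB orb = ORB.create();
        MatOfKeyPoint kpMarker = new MatOfKeyPoint();
        Mat descMarker = new Mat();
        orb.detectAndCompute(marker, new Mat(), kpMarker, descMarker);
        KeyPoint[] mk = kpMarker.toArray();

        BFMatcher matcher = BFMatcher.create(Core.NORM_HAMMING, true);
        Mat frame = new Mat(), gray = new Mat();
        while (video.read(frame)) {
            Imgproc.cvtColor(frame, gray, Imgproc.COLOR_BGR2GRAY);
            MatOfKeyPoint kpFrame = new MatOfKeyPoint();
            Mat descFrame = new Mat();
            orb.detectAndCompute(gray, new Mat(), kpFrame, descFrame);
            if (descFrame.empty()) continue;

            MatOfDMatch matches = new MatOfDMatch();
            matcher.match(descMarker, descFrame, matches);

            // Collect matched point pairs for the homography estimate.
            List<Point> src = new ArrayList<>(), dst = new ArrayList<>();
            KeyPoint[] fk = kpFrame.toArray();
            for (DMatch m : matches.toArray()) {
                src.add(mk[m.queryIdx].pt);
                dst.add(fk[m.trainIdx].pt);
            }
            if (src.size() < 4) continue; // a homography needs >= 4 pairs

            // H maps marker coordinates into this frame; it tells the
            // renderer (SceneKit/Metal/OpenGL) where to draw the overlay.
            Mat H = Calib3d.findHomography(
                    new MatOfPoint2f(src.toArray(new Point[0])),
                    new MatOfPoint2f(dst.toArray(new Point[0])),
                    Calib3d.RANSAC, 3.0);
        }
    }
}

The homography H is the piece the renderer needs: it locates the marker in each frame so the overlay can be drawn in the right place.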

DSLR Canon VideoCapture in OpenCV

I need to capture frames from a DSLR camera. I know that I can use
VideoCapture cap(0);
to capture from the default webcam. But if I connect the camera over USB and run the code, it seems it can't find the camera.
What should I do to capture from the DSLR?
In general, I have found getting OpenCV to work with anything besides a basic webcam almost impossible. In theory, I think it uses the UVC driver, but I have had almost zero luck getting it to read anything else. One thing you can try is using VLC to see if you can capture a video stream from your camera with it. If you can, you might get lucky and figure out which camera or video device the DSLR actually is.
If your DSLR has a development SDK, maybe you can capture frames using its interface and then use OpenCV for processing. I do this for a project: I have a 3rd-party SDK that I use to find and control the camera, and then I move the video data into OpenCV (EmguCV) for processing.
Doug
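The hand-off described above is mostly just wrapping the SDK's raw frame bytes in an OpenCV Mat. A small sketch with OpenCV's Java bindings, assuming a hypothetical SDK that delivers tightly packed 8-bit BGR frames of known size:

import org.opencv.core.CvType;
import org.opencv.core.Mat;

public final class SdkFrameBridge {
    // Hypothetical hand-off: the vendor SDK delivers a tightly packed 8-bit
    // BGR frame, e.g. byte[] bgr = cameraSdk.grabFrame().
    public static Mat wrapSdkFrame(byte[] bgr, int width, int height) {
        Mat frame = new Mat(height, width, CvType.CV_8UC3); // 3-channel BGR
        frame.put(0, 0, bgr); // copy the SDK buffer into the Mat
        return frame;         // ready for any OpenCV processing
    }
}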

How to change vuforia video source with a custom camera

I would like to replace the source camera video with a custom camera stream, using Vuforia and Unity:
Take the video stream from the camera (Android cam or webcam)
Improve contrast, brightness, or other properties manually (for example through OpenCV), and add elements or another pattern that could be optimally recognized by Vuforia.
Resend the modified video stream to Unity 3D and have it detected by Vuforia.
Is it possible?
Is there another way?
As far as I know, this is not possible. Vuforia takes its input directly from the camera and processes it; the most you can do is alter some of the camera settings (if you want to explore that, read about the Vuforia advanced camera API), but given your requirements this is not enough.
Your only option, if you must process the input video, is to handle the detection and tracking yourself without Vuforia (for example, using OpenCV), which is obviously not so easy...
You can use any software that fakes a camera, like http://perfectfakewebcam.com/.
Just prepare your video and feed it to the fake-webcam software, then from Unity change the Vuforia camera device to the fake webcam.

Detecting QR codes in individual frames of a camera stream on iOS

I'm currently analysing frames of a camera video stream using OpenCV for Augmented Reality markers on iOS. I also need to analyse each frame to see if it contains a QR code. I'm currently using the ZBar SDK for iOS to complete the task, but my performance has decreased from around 30 fps to between 7 and 11 fps when it is in use. I know ZBar can be configured to increase the frame rate by only looking for QR codes and ignoring other barcodes, and by adjusting the stride of each sweep, but this has had no real effect.
I noticed that ZXing is an alternative, though it seems to be deprecated now, and AVCaptureMetadataOutput from an AVCaptureSession is the way forward, but I cannot tell whether it processes frame by frame.
Is there any other library that allows processing of video streams frame by frame to detect QR codes on iOS? Or could anyone point me in the direction of an OpenCV tutorial for writing QR code detection and QR content extraction from scratch?
Any help will be appreciated.
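One option on the OpenCV side: recent OpenCV releases (4.x) ship a built-in QRCodeDetector that locates and decodes a QR code in a single call, so you could run it on the frames you are already analysing instead of handing them to ZBar. A minimal sketch using the Java bindings for illustration (the same class exists in the C++ API):

import org.opencv.core.Mat;
import org.opencv.objdetect.QRCodeDetector;

public final class QrScanner {
    private final QRCodeDetector detector = new QRCodeDetector();

    // Run on each frame Mat you already build for the marker analysis.
    public String scan(Mat frame) {
        Mat corners = new Mat(); // filled with the code's 4 corners if found
        String payload = detector.detectAndDecode(frame, corners);
        return payload.isEmpty() ? null : payload; // "" means nothing decoded
    }
}

Whether this beats ZBar on frame rate is something to measure; like ZBar, it gets much cheaper if you only invoke it on every few frames rather than on all thirty.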

In Xcode, how can I do a real-time scan of the camera preview frame by frame using Tesseract OCR?

As many of you know, Tesseract does character recognition on still photos or images. I'm using Xcode for my iOS app and I have this problem: how can I use Tesseract to scan the live camera preview? An app that does this is Word Lens; it does frame-by-frame live recognition and translation of the text being previewed by the camera. I'm trying to do this live character recognition without the translation part. What is the best approach? How can I do a real-time scan of the camera preview frame by frame using Tesseract OCR? Thanks.
I have tested this, and the performance is too low: the camera outputs eight pictures per second, but the OCR needs about 2 seconds to process one.
Some relevant links: A (quasi-) real-time video processing on iOS; tesseract-ios; and How can I make tesseract on iOS faster.
Maybe we need to use OpenCV.
Alternatively, you can use another free product that does OCR in the camera preview: ABBYY Real-Time Recognition OCR.
Disclaimer: I work for ABBYY.
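If you do try the OpenCV route, the usual trick is to shrink and binarize each frame before handing it to Tesseract, and to OCR only every Nth frame instead of all of them. A sketch of the preprocessing, using OpenCV's Java bindings for illustration (the same calls exist in the C++/Objective-C API):

import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public final class OcrPreprocess {
    // Downscale + grayscale + Otsu binarization: a smaller, cleaner image
    // cuts per-frame OCR time considerably.
    public static Mat prepare(Mat frame) {
        Mat small = new Mat();
        Imgproc.resize(frame, small, new Size(), 0.5, 0.5, Imgproc.INTER_AREA);
        Mat gray = new Mat();
        Imgproc.cvtColor(small, gray, Imgproc.COLOR_BGR2GRAY);
        Mat binary = new Mat();
        Imgproc.threshold(gray, binary, 0, 255,
                Imgproc.THRESH_BINARY | Imgproc.THRESH_OTSU);
        return binary; // feed this to Tesseract instead of the raw frame
    }
}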
