Show 3D object on device live camera in Unity3d for iOS

How can I render a 3D object, or a 2D image (.jpg or .png), over the device's live camera feed in Unity3d? The object shouldn't be stuck to the camera; it has to appear at a specific point in the world, the same way an object can be placed at a particular location in a game scene.
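Since this digest is iOS-focused, here is a minimal native sketch of the idea (ARKit/SceneKit in Swift; Unity's AR Foundation and Vuforia expose the same concept of world-anchored content). The function name and the "picture.png" asset are illustrative assumptions:

```swift
import ARKit
import SceneKit
import UIKit

// Hedged sketch: parent the content to the world root, not the camera,
// so it stays at a fixed point in space as the device moves.
// "picture.png" is a hypothetical asset name.
func addWorldAnchoredImage(to sceneView: ARSCNView) {
    sceneView.session.run(ARWorldTrackingConfiguration())

    let plane = SCNPlane(width: 0.3, height: 0.3)   // 30 cm quad
    plane.firstMaterial?.diffuse.contents = UIImage(named: "picture.png")

    let node = SCNNode(geometry: plane)
    node.position = SCNVector3(0, 0, -1)   // 1 m in front of the session origin
    sceneView.scene.rootNode.addChildNode(node)     // world space, not camera space
}
```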

Related

Can we use Vuforia for embedding markers into pre-recorded videos instead of a live camera feed?

I am currently working on a service to update the content of a video based on the markers present in the video. I was curious whether we can use Vuforia to achieve this by providing the pre-recorded video as input to Vuforia instead of the live camera feed from the mobile phone.
TL;DR: This is not possible, because replacing the camera feed is not a function that either Vuforia or ARKit exposes.
Aside from not exposing the camera, both frameworks use a combination of camera input and sensor data (gyro, accelerometer, compass, altitude, etc.) to calculate the camera/phone's position (translation/rotation) relative to the marker image.
The effect you are looking for is image tracking and rendering within a video feed. You should consider OpenCV (or another computer vision library) for the feature-point tracking. With regard to rendering, there are three options: SceneKit, Metal, or OpenGL. Following Apple's lead, you could use SceneKit for the rendering, similar to how ARKit handles the sensor inputs and uses SceneKit to render. If you are ambitious and want to control the rendering yourself, you could use Metal or OpenGL.
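As a rough sketch of the tracking half on iOS (using Apple's Vision framework in place of OpenCV), you could read frames from the pre-recorded video with AVAssetReader and track a region across them. `videoURL` and `initialBox` (a normalized rectangle around the marker in the first frame) are assumptions you would supply:

```swift
import AVFoundation
import Vision

// Sketch: pull frames out of a pre-recorded video and track a region across
// them, driving a SceneKit/Metal overlay from the per-frame bounding box.
func trackRegion(in videoURL: URL, initialBox: CGRect) throws {
    let asset = AVAsset(url: videoURL)
    guard let track = asset.tracks(withMediaType: .video).first else { return }

    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(
        track: track,
        outputSettings: [kCVPixelBufferPixelFormatTypeKey as String:
                         kCVPixelFormatType_32BGRA])
    reader.add(output)
    reader.startReading()

    let handler = VNSequenceRequestHandler()
    var observation = VNDetectedObjectObservation(boundingBox: initialBox)

    while let sample = output.copyNextSampleBuffer(),
          let pixelBuffer = CMSampleBufferGetImageBuffer(sample) {
        let request = VNTrackObjectRequest(detectedObjectObservation: observation)
        try handler.perform([request], on: pixelBuffer)
        if let result = request.results?.first as? VNDetectedObjectObservation {
            observation = result
            // result.boundingBox is the tracked region in this frame,
            // in normalized coordinates; position your overlay from it.
        }
    }
}
```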

I can't use the face tracker with the back camera in Android

I am creating a game that adds filters and 3D objects in real time, but I can't configure the back camera for face tracking. For example, I need to add a 3D object to a person using the back camera, and it has to track their rotation and movement. Thanks a lot.
The AugmentedFaces API works only with the front camera.
You can check out the shared camera access documentation and see whether you can combine ARCore with face detection from Firebase or OpenCV.

Is there front facing camera support with ARKit?

How can we access front-facing camera images with ARCamera or ARSCNView, and is it possible to record an ARSCNView just like a normal camera recording?
Regarding the front-facing camera: in short, no.
ARKit offers two basic kinds of AR experience:
World Tracking (ARWorldTrackingConfiguration), using the back-facing camera, where a user looks "through" the device at an augmented view of the world around them. (There's also AROrientationTrackingConfiguration, which is a reduced quality version of world tracking, so it still uses only the back-facing camera.)
Face Tracking (ARFaceTrackingConfiguration), supported only with the front-facing TrueDepth camera on iPhone X, where the user sees an augmented view of themselves in the front-facing camera view. (As @TawaNicolas notes, Apple has sample code here... which, until the iPhone X actually becomes available, you can read but not run.)
In addition to the hardware requirement, face tracking and world tracking are mostly orthogonal feature sets. So even though there's a way to use the front-facing camera (on iPhone X only), it doesn't give you an experience equivalent to what you get with the back-facing camera in ARKit.
Regarding video recording in the AR experience: you can use ReplayKit in an ARKit app the same as in any other app.
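For example, a minimal ReplayKit start (stopping later with stopRecording) might look like this:

```swift
import ReplayKit

// ReplayKit records the whole screen: the ARSCNView plus any UI overlays.
RPScreenRecorder.shared().startRecording { error in
    if let error = error {
        print("Could not start recording: \(error)")
    }
}
// Later: RPScreenRecorder.shared().stopRecording { previewController, error in ... }
```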
If you want to record just the camera feed, there isn't a high-level API for that, but in theory you might have some success feeding the pixel buffers you get in each ARFrame to AVAssetWriter.
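A hedged sketch of that approach, assuming you set an instance of this object as the ARSession's delegate and call finish when done (FrameRecorder and its parameters are illustrative, not an Apple API):

```swift
import ARKit
import AVFoundation

// Illustrative sketch: append each ARFrame's captured pixel buffer to a movie
// file with AVAssetWriter. Error handling and audio are omitted.
final class FrameRecorder: NSObject, ARSessionDelegate {
    private let writer: AVAssetWriter
    private let input: AVAssetWriterInput
    private let adaptor: AVAssetWriterInputPixelBufferAdaptor
    private var started = false

    init(outputURL: URL, width: Int, height: Int) throws {
        writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
        input = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: width,
            AVVideoHeightKey: height
        ])
        input.expectsMediaDataInRealTime = true
        adaptor = AVAssetWriterInputPixelBufferAdaptor(
            assetWriterInput: input, sourcePixelBufferAttributes: nil)
        writer.add(input)
        super.init()
    }

    // ARKit calls this once per camera frame.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let time = CMTime(seconds: frame.timestamp, preferredTimescale: 600)
        if !started {
            writer.startWriting()
            writer.startSession(atSourceTime: time)
            started = true
        }
        if input.isReadyForMoreMediaData {
            adaptor.append(frame.capturedImage, withPresentationTime: time)
        }
    }

    func finish(completion: @escaping () -> Void) {
        input.markAsFinished()
        writer.finishWriting(completionHandler: completion)
    }
}
```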
As far as I know, ARKit with the front-facing camera is only supported on iPhone X.
Here's Apple's sample code regarding this topic.
If you want to access the UIKit or AVFoundation cameras, you still can, but separately from ARSCNView. For example, I'm loading UIKit's UIImagePickerController from an IBAction; it's a little awkward to do so, but it works for my purposes (loading/creating image and video assets).

Object Detection with moving camera

I understand that with a moving object and a stationary camera, it is easy to detect objects by subtracting the previous and current camera frames. It is also possible to detect moving objects when the camera is moving freely around the scene.
But is it possible to detect stationary objects with a camera rotating around the object? The camera's movement is predefined, and it is restricted to a specified path around the object.
Try the CAMShift demo, which is located in the OpenCV source code at samples/cpp/camshiftdemo.cpp, or other algorithms like MeanShift, KCF, etc. These are all object tracking algorithms.
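Those samples are C++; staying in Swift on iOS, the frame-differencing idea from the question (not CAMShift itself) can be sketched with Core Image, assuming you already receive consecutive frames as CVPixelBuffers:

```swift
import CoreImage

// Frame differencing: |current - previous| highlights pixels that changed,
// which is a cheap way to localize a moving object with a stationary camera.
func motionMask(previous: CVPixelBuffer, current: CVPixelBuffer) -> CIImage? {
    let prev = CIImage(cvPixelBuffer: previous)
    let curr = CIImage(cvPixelBuffer: current)

    guard let diff = CIFilter(name: "CIDifferenceBlendMode",
                              parameters: [kCIInputImageKey: curr,
                                           kCIInputBackgroundImageKey: prev])?
        .outputImage
    else { return nil }

    // Grayscale so the result can be thresholded into a binary mask downstream.
    return diff.applyingFilter("CIPhotoEffectMono")
}
```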

How to create a 3d map for an augmented reality game?

I want to build a game like this one: AR Invaders.
How can I create a 3D map? The iPhone's camera should be the middle of the 3D circle map.
I don't know if "3D circle map" is the correct term for it.
The iPhone should be at the middle of the 3D circle map, and around the iPhone should be the objects to kill.
So how can I create this augmented reality map, so that when I move forward or backward with the iPhone the objects grow and shrink accordingly? Or when I move the iPhone upward?
Any suggestions?
It looks like the game uses only IMU data and video capture from the front camera. If you want to use more than just rotation in your game, you probably need some kind of SLAM, like this one: http://13thlab.com/ballinvasion/
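Today, ARKit's world tracking provides exactly that kind of SLAM-style translation tracking. A minimal SceneKit sketch of "targets on a sphere around the player" might look like this (spawnEnemies, the count, and the radius are illustrative assumptions):

```swift
import ARKit
import SceneKit

// With ARWorldTrackingConfiguration the session origin is the device's
// starting pose, so nodes placed around (0, 0, 0) surround the player.
// Walking toward a node makes it appear larger automatically.
func spawnEnemies(in sceneView: ARSCNView, count: Int = 10, radius: Float = 3) {
    sceneView.session.run(ARWorldTrackingConfiguration())
    for _ in 0..<count {
        let theta = Float.random(in: 0 ..< 2 * Float.pi)            // azimuth
        let phi = Float.random(in: -Float.pi / 4 ... Float.pi / 4)  // elevation band
        let node = SCNNode(geometry: SCNSphere(radius: 0.2))
        node.position = SCNVector3(radius * cos(phi) * sin(theta),
                                   radius * sin(phi),
                                   radius * cos(phi) * cos(theta))
        sceneView.scene.rootNode.addChildNode(node)
    }
}
```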
