Unity3D on iOS, inspecting the device camera image in Obj-C

I have a Unity/iOS app that captures the user's photo and displays it in the 3D environment. Now I'd like to leverage CIFaceFeature to find eye positions, which requires accessing the native (Objective-C) layer. My flow looks like:
Unity -> WebCamTexture (encode and send image to native -- this is SLOW)
Obj-C -> CIFaceFeature (find eye coords)
Unity -> Display eye positions
I've got a working prototype, but it's slow because I'm capturing the image in Unity (WebCamTexture) and then sending it to Obj-C to do the FaceFeature detection. It seems like there should be a way to simply ask my Obj-C class to "inspect the active camera". This would have to be much, much faster than encoding and passing an image.
So my question, in a nutshell:
Can I query in Obj-C 'is there a camera currently capturing?'
If so, how do I 'snapshot' the image from that currently running session?
Thanks!

You can access the camera's preview capture stream by changing CameraCapture.mm in Unity.
I suggest you have a look at the existing plugin called Camera Capture for an example of how additional camera I/O functionality can be added to the capture session / "capture pipeline".
To set you off in the right direction, have a look at the function initCapture in CameraCapture.mm:
- (bool)initCapture:(AVCaptureDevice*)device width:(int)w height:(int)h fps:(float)fps
Here you will be able to add your own processing to the capture session.
Then have a look at the code sample provided by Apple on facial recognition:
https://developer.apple.com/library/ios/samplecode/SquareCam/Introduction/Intro.html
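To close the loop back to Unity (the "display eye positions" step), one option is a small native-plugin bridge on the Unity side. A minimal sketch, assuming you export a hypothetical extern "C" function, here called _GetEyePositions, from the modified capture code:

using System.Runtime.InteropServices;
using UnityEngine;

public class EyeTracker : MonoBehaviour
{
#if UNITY_IOS && !UNITY_EDITOR
    // Hypothetical native function exported from the modified CameraCapture.mm;
    // it would return true and fill the out parameters when a face is visible.
    [DllImport("__Internal")]
    private static extern bool _GetEyePositions(out float leftX, out float leftY,
                                                out float rightX, out float rightY);
#endif

    void Update()
    {
#if UNITY_IOS && !UNITY_EDITOR
        float lx, ly, rx, ry;
        if (_GetEyePositions(out lx, out ly, out rx, out ry))
        {
            // Use the coordinates to place or drive objects in the 3D scene.
            Debug.Log("Left eye: (" + lx + ", " + ly + ")  Right eye: (" + rx + ", " + ry + ")");
        }
#endif
    }
}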
Cheers

Unity 3D allows execution of native code. In the scripting reference, look for native plugins. That way you can display a native iOS view (with the camera view, possibly hidden depending on your requirements) and run Objective-C code, then return the results of eye detection to Unity if you need them in the 3D view.
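Another way to return the detection results is to have the Objective-C side push them with UnitySendMessage and parse them in a MonoBehaviour. A minimal sketch of the receiving end, assuming the native code sends a string of the form "lx,ly,rx,ry" to a GameObject named "EyeReceiver" (both names are assumptions, not a fixed API):

using UnityEngine;

// Attach to a GameObject named "EyeReceiver". The native plugin would call
// UnitySendMessage("EyeReceiver", "OnEyesDetected", "lx,ly,rx,ry") after detection.
public class EyeReceiver : MonoBehaviour
{
    public void OnEyesDetected(string message)
    {
        string[] parts = message.Split(',');
        if (parts.Length < 4) return;

        Vector2 leftEye  = new Vector2(float.Parse(parts[0]), float.Parse(parts[1]));
        Vector2 rightEye = new Vector2(float.Parse(parts[2]), float.Parse(parts[3]));

        // Use the coordinates to position markers in the 3D scene.
        Debug.Log("Left: " + leftEye + "  Right: " + rightEye);
    }
}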

Related

Creating singular view in Unity using GoogleVR (Google Cardboard) for iOS

I am a college student attempting to build a VR application for iOS using Unity paired with the GoogleVR SDK (Google Cardboard). I can get my app to run on an iPad, but the display on the screen is split into two viewports (or cameras, I'm not sure of the correct terminology), one for each eye.
While this may be contradictory to the idea of VR, I actually only want a single central camera's perspective and for that display to fill the whole screen.
I've been searching through the Unity project files and the Google Cardboard files, but haven't found a way to do this. Is there a simple way to turn off the two-eye display and instead use a single view? If so, what file would I modify?
Thanks!
The main things that the Cardboard SDK gives you on iOS are stereoscopic rendering, control of the camera rotation based on the gyroscope, and the gaze pointer. If you don't want stereoscopic rendering, you can disable VR support in XR Settings and use some simple replacements for the other two items. You can add a regular camera to your scene and then use a script like this to set its rotation based on the phone's gyroscope:
using UnityEngine;

public class SceneManager : MonoBehaviour
{
    void Start()
    {
        // Enable the gyro so that it can be used to control the camera rotation.
        Input.gyro.enabled = true;
    }

    void Update()
    {
        // Update the camera rotation based on the gyroscope.
        Camera.main.transform.Rotate(
            -Input.gyro.rotationRateUnbiased.x,
            -Input.gyro.rotationRateUnbiased.y,
            Input.gyro.rotationRateUnbiased.z
        );
    }
}
To replace the gaze pointer, you can use Unity's standalone input module to route screen touch events through the input system (e.g. to trigger scripts that implement IPointerClickHandler).
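As a minimal sketch of the receiving script (this assumes an EventSystem with the standalone input module in the scene, plus a PhysicsRaycaster on the camera if the target is a 3D object):

using UnityEngine;
using UnityEngine.EventSystems;

// Attach to any object with a collider (or to a UI element); OnPointerClick
// fires when the input module routes a screen tap to this object.
public class TapHandler : MonoBehaviour, IPointerClickHandler
{
    public void OnPointerClick(PointerEventData eventData)
    {
        Debug.Log("Tapped: " + gameObject.name);
    }
}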

Placing objects automatically when a ground plane is detected with Vuforia

I'm working on an application where the concept is that you can 'select' objects before actually placing them. So what I wanted to do was have some low-quality objects on a shelf or something like that. When the user selects an object, he can then tap to place the high-quality version of the object in his area for further viewing.
I was wondering if this is possible with Vuforia. I wanted to use this platform since it works well from what I could tell and it's cross-platform (the application needs to run on Android and the HoloLens).
I have set up the basic application where you can place a capsule in the area. Now I wanted to automatically place the (in this case) capsule once Vuforia has detected a ground plane. From what I could see, the plane finder has events that fire when an input is detected, but I couldn't find an event that fires when the ground plane is detected. Is it still possible with Vuforia? I know it's doable with the HoloLens, but I would like to know if it's possible for Android or other mobile devices. I really don't know where to start looking, so I hope someone can point me in the right direction.
Let me know if I need to include more information!
The Vuforia PlaneFinderBehaviour (see doc here) has the event OnAutomaticHitTest, which fires every frame a ground plane is detected.
So you can use it to automatically spawn an object.
You have to add your method to the On Automatic Hit Test list (instead of the On Interactive Hit Test list) of the "Plane Finder":
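As a minimal sketch of such a handler (the class and field names are assumptions; the HitTestResult parameter comes from Vuforia's hit-test callback):

using UnityEngine;
using Vuforia;

public class AutoPlacer : MonoBehaviour
{
    public GameObject objectToPlace;   // prefab to spawn (assign in the Inspector)
    private bool placed;

    // Wire this method into the Plane Finder's "On Automatic Hit Test" event.
    public void OnAutomaticHitTest(HitTestResult result)
    {
        if (placed) return;            // spawn only once
        Instantiate(objectToPlace, result.Position, Quaternion.identity);
        placed = true;
    }
}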
I've heard that Vuforia Fusion does not yet support ARCore (it supports ARKit), so it uses an internal implementation to simulate ARCore functionality, and they are waiting for a final release of ARCore to support it. Many users have reported that their objects move even when they use an ARCore-supported device.

Create a dynamic camera overlay that changes with scenes

Hi, I am new to iOS development. I've been searching for a while and just cannot find a good example.
I'd like to create an app that dynamically detects paper.
The app should be able to generate a dynamic camera overlay by analysing the current camera scene,
like Adobe Scan.
I can program the sampling and detection algorithm myself,
but I'm really not good with the iOS interface side of things.
I am using Objective-C as the programming language.
I'd like to know:
1. How do I change the camera overlay in real time?
2. How should I set up a timer that repeatedly checks the camera scene?
3. How can I access and modify a picture in Objective-C on iOS?
Please give me some ideas or show me some examples.

Is there any way to capture video or images from a Unity (which uses Vuforia) iOS application?

I have Augmented Reality functionality made using Unity + the Vuforia plugin, which I integrated into the iOS application. The app uses the camera as the background, and when you point the camera at a marker, a 3D object appears on it.
My task is to add buttons which will start and stop capturing video (or an image) from the camera. The output should be a video of the camera scene + the 3D object.
I did some investigation, but the only solution I found is to convert the view of the AVCaptureVideoPreviewLayer, on which the camera preview is showing, into a video (or image). But in my opinion, this solution is inefficient and not flexible.
Is there any way to get the current instance of the AVCaptureSession from Unity (or maybe the Vuforia plugin)? Or maybe there is another way to solve my problem?
Any advice or guides would be very helpful.
I don't think you should use AVCaptureSession to get the preview or do the capture operation in Cocoa Touch; instead, you should capture the image in Unity and pass the data to the Cocoa Touch native API.
Here is the link on how to capture a screenshot in Unity.
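As a minimal sketch of that Unity-side capture, assuming you read the back buffer at the end of a frame and encode it (the hand-off to the native plugin itself is not shown):

using System.Collections;
using UnityEngine;

public class FrameCapture : MonoBehaviour
{
    // Call with StartCoroutine(CaptureFrame()) when the record/snapshot button is pressed.
    public IEnumerator CaptureFrame()
    {
        // Wait until rendering has finished so the camera background and the 3D object are both included.
        yield return new WaitForEndOfFrame();

        // Read the current back buffer into a texture.
        Texture2D tex = new Texture2D(Screen.width, Screen.height, TextureFormat.RGB24, false);
        tex.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0);
        tex.Apply();

        // Encode and pass the bytes to the native side (e.g. via a DllImport-ed function).
        byte[] png = tex.EncodeToPNG();
        Destroy(tex);
        Debug.Log("Captured frame: " + png.Length + " bytes");
    }
}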

Is it possible to use Vuforia without a camera?

Is it possible to use Vuforia without a camera for image tracking?
Basically I would like a function I could call with an image as an input parameter and the coordinates of an image target as a result. Does that exist?
It is unfortunately not possible. I've been looking for such an option myself several times while working on a Moodstocks (image recognition SDK) / Vuforia mashup (see these 2 blog posts if you are interested in it), but the Vuforia SDK prevents the use of any source other than the camera.
I guess the main reason for this is that the camera management is fully handled internally by the Vuforia SDK, probably in order to make it easier to use, as managing the camera yourself is at best a boring task (lines and lines of code to repeat in each project...), at worst a huge pain in the ass (especially on Android, where there are sometimes devices that don't behave as expected).
By the way, it looks to me like the Vuforia SDK is not the best solution for your use case: it is mainly an augmented-reality SDK focused on real-time tracking, which implies working with a camera stream... so using it for "simple" image recognition really looks like overkill!
