Vuforia 4 and Unity 5 AR on touchscreen - augmented-reality

I am completely new to AR. I created three markers using Vuforia and Unity, and each one displays a 3D model. So far, when I show a marker to my webcam, it displays the 3D object. Now I am trying to add a touchscreen function: when the user sees one of my 3D models and touches it, a message should pop up. I followed different tutorials from various forums and YouTube videos, but none of them worked. Could you please give me a detailed description of what I need to do? Or is this caused by a bug in Unity?

It is very simple:
1. Add colliders to your game objects.
2. Get a reference to the AR Camera.
3. Use the AR Camera to turn the touch position into a ray.
4. Use Physics.Raycast to find the object the ray hits.
Just add a script like this to your scene:
using UnityEngine;

// Attach this to any GameObject and assign the AR Camera in the Inspector.
public class TouchSelector : MonoBehaviour
{
    // We need a reference to the AR Camera in the scene
    public Camera ARCameraReference;

    void Update()
    {
        // GetMouseButtonDown fires once per click, and on mobile it also fires
        // once for the first touch, so taps on the screen are handled too.
        if (Input.GetMouseButtonDown(0))
        {
            Ray ray = ARCameraReference.ScreenPointToRay(Input.mousePosition);
            RaycastHit hit;
            if (Physics.Raycast(ray, out hit))
            {
                Debug.Log("Hit: " + hit.collider.gameObject.name);
                // Do whatever you want from here
                // ...
            }
        }
    }
}
For your reference:
http://docs.unity3d.com/ScriptReference/Collider.html
http://docs.unity3d.com/ScriptReference/Physics.Raycast.html

Related

Orient text to always face the camera with ARKit & RealityKit

Ok so I'm currently developing an AR app using ARKit + RealityKit, and I'm struggling with a basic feature: I want to display some text near my 3D object, but that text needs to always face the camera to be readable. I couldn't find any way to display pure 2D text, so I decided to display a 3D text mesh and orient it toward the camera using a subscription, but I can't manage to make it work.
Here's the subscription responsible for the orientation change:
var labelSubscription: Cancellable!
// ...
labelSubscription = arView.scene.subscribe(to: SceneEvents.Update.self) { (_) in
    labelEntity.look(at: arView.cameraTransform.translation,
                     from: labelEntity.position(relativeTo: nil),
                     upVector: DOWN,
                     relativeTo: nil)
    print("update triggered")
}
But this doesn't do anything, and the print statement is never reached.
Also, even calling labelEntity.look right after instantiating my entity (without a subscription, just to orient it once at the start) doesn't seem to do anything.
How can I make this work? And is there a more convenient feature for displaying 2D text in my AR view? Tysm :)
PS: I'm new to Swift in general, so I'm not sure what types to put in the closure or what design patterns to use here. Any good learning material for ARKit would be appreciated.
EDIT: Here's a gist with the full code if needed: https://gist.github.com/nohehf/b8ef8d83cc0f0f68abafba454668a779
EDIT2: Setting a timer to call the look function periodically let me see that it is wrong too, so I have two issues: my look function doesn't do what I expect (the orientation of the text is wrong, see image below), and the scene subscription is never called.
[Image: the text should be facing the camera]

Creating singular view in Unity using GoogleVR (Google Cardboard) for iOS

I am a college student attempting to build a VR application for iOS using Unity paired with the GoogleVR SDK (Google Cardboard). I can get my app to run on an iPad, but the display on the screen is split into two viewports (or cameras; I'm not sure of the correct terminology), one for each eye.
While this may be contradictory to the idea of VR, I actually want only a single central camera's perspective, and for that view to fill the whole screen.
I've been searching through the Unity project files and the Google Cardboard files but haven't found a way to do this. Is there a simple way to turn off the two-eye display and render a single view instead? If so, which file would I modify?
Thanks!
The main things that the Cardboard SDK gives you on iOS are stereoscopic rendering, control of the camera rotation based on the gyroscope, and the gaze pointer. If you don't want stereoscopic rendering, you can disable VR support in XR Settings and use simple replacements for the other two items. You can add a regular camera to your scene and then use a script like this to set its rotation based on the phone's gyroscope:
using UnityEngine;

class SceneManager : MonoBehaviour {
    void Start() {
        // Enable the gyro so that it can be used to control the camera rotation.
        Input.gyro.enabled = true;
    }

    void Update() {
        // Update the camera rotation based on the gyroscope.
        // rotationRateUnbiased is in radians per second, so convert to degrees
        // and scale by the frame time to stay framerate-independent.
        Vector3 rate = Input.gyro.rotationRateUnbiased * Mathf.Rad2Deg * Time.deltaTime;
        Camera.main.transform.Rotate(-rate.x, -rate.y, rate.z);
    }
}
To replace the gaze pointer, you can use Unity's standalone input module to route screen touch events through the input system (e.g. to trigger scripts that implement IPointerClickHandler).
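A minimal sketch of such a script (assuming the scene has an EventSystem with a StandaloneInputModule and a PhysicsRaycaster on the camera; the class name is illustrative):
using UnityEngine;
using UnityEngine.EventSystems;

// Attach to any GameObject that has a collider. OnPointerClick fires when
// the object is tapped and the event system routes the touch to it.
public class TapHandler : MonoBehaviour, IPointerClickHandler
{
    public void OnPointerClick(PointerEventData eventData)
    {
        Debug.Log("Tapped: " + gameObject.name);
    }
}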

ARCore Augmented Images with 3D object interaction

I want to build a digital catalog application where I detect an image in a catalog and place a 3D object on it. This can be achieved with ARCore Augmented Images. What I also need: when I click/touch the 3D object, I need to show some information and videos. For this particular task I'm considering SDK options: without Vuforia, can this be achieved using ARCore + Unity, Android OpenCV, or anything else?
This requires a fair amount of work, from creating animations and layers to defining colliders and controlling them from code.
First you create the animations and animation controllers, then add colliders to the hotspots where you want the user to tap the object (e.g. touch the door to open it), then map each collider's click event to fire a specific animation.
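A minimal sketch of that last step (the OnMouseDown approach and the "Open" trigger name are assumptions for illustration):
using UnityEngine;

// Attach to a GameObject that has a collider. When the hotspot is tapped,
// it fires a trigger on the assigned Animator.
public class HotspotTrigger : MonoBehaviour
{
    public Animator animator;          // assign in the Inspector
    public string triggerName = "Open";

    // OnMouseDown also fires for taps on mobile when this collider is hit.
    void OnMouseDown()
    {
        animator.SetTrigger(triggerName);
    }
}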
It is actually better to follow a tutorial that covers the animation basics first; it will then be easy to combine with an AR project:
https://unity3d.com/learn/tutorials/s/animation

How to turn off/on AR in an augmented reality app using ARKit?

I'm starting to learn how to use ARKit, and I would like to add a button like the one in the Pokémon GO application where you can switch between AR ON (with the model placed in the real world) and AR OFF (no camera, just the 3D model on a fixed background). Is there an easy way to do it?
Another good example of what you're asking about is the AR Quick Look feature in iOS 12 (see WWDC video or this article): when you quick look a USDZ file you get a generic white-background preview where you can spin the object around with touch gestures, and you can seamlessly switch back and forth between that and a real-world AR camera view.
You've asked about ARKit but not said anything about which renderer you're using. Remember, ARKit itself only tells you about the real world and provides live camera imagery, but it's up to you to display that image and whatever 3D overlay content you want — either by using a 3D graphics framework like SceneKit, Unity, or Unreal, or by creating your own renderer with Metal. So the rest of this answer is renderer-agnostic.
There are two main differences between an AR view and a non-AR 3D view of the same content:
An AR view displays the live camera feed in the background; a non-AR view doesn't.
3D graphics frameworks typically involve some notion of a virtual camera that determines your view of the 3D scene — by moving the camera, you change what part of the scene you see and what angle you see it from. In AR, the virtual camera is made to match the movement of the real device.
Hence, to switch between AR and non-AR 3D views of the same content, you just need to manipulate those differences in whatever way your renderer allows:
Hide the live camera feed. If your renderer lets you directly turn it off, do that. Otherwise you can put some foreground content in front of it, like an opaque skybox and/or a plane for your 3D models to rest on.
Directly control the camera yourself and/or provide touch/gesture controls for the user to manipulate the camera. If your renderer supports multiple cameras in the scene and choosing which one is currently used for rendering, you can keep and switch between the ARKit-managed camera and your own.
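For example, in Unity a minimal sketch of both points might look like this (the two-camera setup and class name are assumptions for illustration, not part of ARKit):
using UnityEngine;

// Switch between an AR view and a plain 3D view of the same content.
// Assumes arCamera's pose is driven by the AR session and renders the live
// feed, while orbitCamera is user-controlled and clears to a solid color.
public class ARModeSwitcher : MonoBehaviour
{
    public Camera arCamera;
    public Camera orbitCamera;

    public void SetARMode(bool arOn)
    {
        // Point 1: only the AR camera shows the live camera feed behind the scene.
        // Point 2: switching cameras also switches who controls the viewpoint.
        arCamera.enabled = arOn;
        orbitCamera.enabled = !arOn;
    }
}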

Unity3D on iOS, inspecting the device camera image in Obj-C

I have a Unity/iOS app that captures the user's photo and displays it in the 3D environment. Now I'd like to leverage CIFaceFeature to find eye positions, which requires accessing the native (Objective-C) layer. My flow looks like:
Unity -> WebCamTexture (encode and send image to native -- this is SLOW)
Obj-C -> CIFaceFeature (find eye coords)
Unity -> Display eye positions
I've got a working prototype, but it's slow because I'm capturing the image in Unity (WebCamTexture) and then sending it to Obj-C to do the FaceFeature detection. It seems like there should be a way to simply ask my Obj-C class to "inspect the active camera". This would have to be much, much faster than encoding and passing an image.
So my question, in a nutshell:
Can I query in Obj-C 'is there a camera currently capturing?'
If so, how do I 'snapshot' the image from that currently running session?
Thanks!
You can access the camera's preview capture stream by changing CameraCapture.mm in Unity.
I suggest you have a look at an existing plugin called Camera Capture for an example of how additional camera I/O functionality can be added to the capture session / "capture pipeline".
To set you off in the right direction, have a look at the function initCapture in CameraCapture.mm:
- (bool)initCapture:(AVCaptureDevice*)device width:(int)w height:(int)h fps:(float)fps
Here you will be able to add to the capture session.
And then you should have a look at the code sample provided by Apple on facial recognition:
https://developer.apple.com/library/ios/samplecode/SquareCam/Introduction/Intro.html
Cheers
Unity 3D allows execution of native code. In the scripting reference, look for native plugins. This way you can display a native iOS view (with the camera view, possibly hidden depending on your requirements) and run Objective-C code. Then return the results of the eye detection to Unity if you need them in the 3D view.
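A minimal sketch of the Unity side of such a plugin (the function name _StartEyeDetection is hypothetical; the matching extern "C" function would be implemented in the Objective-C part of the Xcode project):
using System.Runtime.InteropServices;
using UnityEngine;

public class EyeDetectionBridge : MonoBehaviour
{
#if UNITY_IOS && !UNITY_EDITOR
    // Implemented in the native plugin; "__Internal" is how iOS plugins are linked.
    [DllImport("__Internal")]
    private static extern void _StartEyeDetection();
#endif

    public void StartDetection()
    {
#if UNITY_IOS && !UNITY_EDITOR
        _StartEyeDetection();
#else
        Debug.Log("Native eye detection only runs in an iOS device build.");
#endif
    }
}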
