Realtime augmented reality color changing?

I am developing an augmented reality application that recognizes objects and changes their colors in real time (for example, the color of the walls of a house). Can I use the Vuforia SDK for this, or are there better SDKs for the job?

Basically yes - Vuforia can detect pre-known images and notify you when something has been detected; what you do then is up to you. However, it depends on the objects you plan to recognize - Vuforia does not allow just any image to be detected: the image must have enough features. You can read about this here: https://developer.vuforia.com/library/articles/Solution/Natural-Features-and-Ratings

Related

Placing objects automatically when a ground plane is detected with Vuforia

I'm working on an application where the concept is that you can 'select' objects before actually placing them. So what I wanted to do was have some low-quality objects on a shelf or something like it. When the user selects an object, he can then tap to place the high-quality version of that object in his area for further viewing.
I was wondering if this is possible with Vuforia. I wanted to use this platform since it works well from what I could tell and it's cross-platform (the application needs to run on Android and the HoloLens).
I have set up the basic application where you can place a capsule in the area. Now I wanted to automatically place the object (in this case a capsule) once Vuforia has detected a ground plane. From what I could see, the plane finder has events that fire when an input is detected, but I couldn't find an event that fires when the ground plane itself is detected. Is this still possible with Vuforia? I know it's doable on the HoloLens, but I would like to know if it's possible on Android or other mobile devices. I really don't know where to start looking, so I hope someone can point me in the right direction.
Let me know if I need to include more information!
The Vuforia PlaneFinderBehaviour (see doc here) has the event OnAutomaticHitTest which fires every frame a ground plane is detected.
So you can use it to automatically spawn an object.
You have to add your method to the On Automatic Hit Test list instead of the On Interactive Hit Test list of the "Plane Finder".
I've heard that Vuforia Fusion does not yet support ARCore (it does support ARKit), so it uses an internal implementation to simulate ARCore functionality, and they are waiting for a final release of ARCore before supporting it. Many users have reported that their objects move even when they use an ARCore-supported device.

Is it possible to recognise light patterns on iOS?

Is it possible to recognise light patterns on iOS?
Is there a native iOS SDK to do so?
Use case:
Detect light patterns (e.g. on / off) using smartphone camera
Background information:
Apple acquired Metaio last year, so I presume at some point we will have such an SDK, but for now I presume the best way to achieve this is to use a third-party SDK, or to capture images and process them (if the images are simple enough that a simple algorithm can be applied).
You could take a look at Kudan AR. https://www.kudan.eu/
They currently offer an SDK for iOS, but not yet for Android. Their tracking quality is phenomenally good, but I do not know whether it is appropriate for your goals. It would be best to talk to them and ask if their tracking fits your needs.
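If a full SDK turns out to be overkill, the frame-processing idea from the question can be prototyped with plain AVFoundation: feed each frame from an AVCaptureVideoDataOutput callback into a helper like the one below and record the resulting on/off sequence over time. This is only an untested sketch; it assumes the output is configured for kCVPixelFormatType_32BGRA, and the sampling stride and threshold are arbitrary placeholders.

#import <AVFoundation/AVFoundation.h>

// Given a BGRA camera frame, compute its average luminance and threshold it
// to decide whether the light source is currently "on".
static BOOL IsLightOn(CVPixelBufferRef buffer)
{
    CVPixelBufferLockBaseAddress(buffer, kCVPixelBufferLock_ReadOnly);
    size_t width = CVPixelBufferGetWidth(buffer);
    size_t height = CVPixelBufferGetHeight(buffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(buffer);
    uint8_t *base = (uint8_t *)CVPixelBufferGetBaseAddress(buffer);

    double sum = 0;
    long count = 0;
    for (size_t y = 0; y < height; y += 8) {            // sample a sparse grid, not every pixel
        for (size_t x = 0; x < width; x += 8) {
            uint8_t *px = base + y * bytesPerRow + x * 4;
            sum += 0.114 * px[0] + 0.587 * px[1] + 0.299 * px[2];   // BGRA byte order
            count++;
        }
    }
    CVPixelBufferUnlockBaseAddress(buffer, kCVPixelBufferLock_ReadOnly);

    return (sum / count) > 128.0;                       // arbitrary on/off threshold
}

Calling this for every frame and timestamping the on/off transitions gives you the light pattern to decode.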

Reproduce Google Earth with iOS 7 MapKit's custom tiles

I would love to reproduce a Google Earth-like 3D map flyover even when offline.
As of iOS 7, MapKit allows us to draw custom offline tiles. It also allows us to set a camera in order to see the map in 3D (or 2.5D, as you may wish to call it).
I was wondering: can I draw a 3D shape, like Apple does for its flyover feature, on my custom tiles?
I need to apply a "bump map" to the map in order to get a Google Earth-like 3D view, and I was wondering if Apple would allow me to do just that with iOS 7 custom tile rendering + camera settings.
Thanks
I have experimented pretty extensively with this, and there is no supported way to do it. Right now, Apple only offers raster tile-based overlays, albeit with an automatic 2.5D/3D transformation when they are overlaid on the map. Hopefully in the future they will support a 3D API and/or custom (say, OpenGL-based) augmentation of the map.
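For reference, the supported combination of custom raster tiles plus a tilted camera looks roughly like the sketch below. It is untested; the tile URL template, self.mapView, centerCoordinate, eyeCoordinate and the altitude are placeholders you would replace with your own.

#import <MapKit/MapKit.h>

// Custom (possibly offline) raster tiles replacing Apple's base map.
MKTileOverlay *tiles = [[MKTileOverlay alloc] initWithURLTemplate:@"file:///path/to/tiles/{z}/{x}/{y}.png"];
tiles.canReplaceMapContent = YES;
[self.mapView addOverlay:tiles level:MKOverlayLevelAboveLabels];

// A tilted camera gives the 2.5D "flyover"-style perspective.
MKMapCamera *camera = [MKMapCamera cameraLookingAtCenterCoordinate:centerCoordinate
                                                 fromEyeCoordinate:eyeCoordinate
                                                       eyeAltitude:400];
[self.mapView setCamera:camera animated:YES];

// MKMapViewDelegate method that renders the tile overlay.
- (MKOverlayRenderer *)mapView:(MKMapView *)mapView rendererForOverlay:(id<MKOverlay>)overlay
{
    if ([overlay isKindOfClass:[MKTileOverlay class]]) {
        return [[MKTileOverlayRenderer alloc] initWithTileOverlay:(MKTileOverlay *)overlay];
    }
    return nil;
}

The tiles themselves stay flat, though: there is no public hook for feeding elevation data into that transformation.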

Is it possible to use Vuforia without a camera?

Is it possible to use Vuforia without a camera for image tracking?
Basically, I would like a function I could call with an image as an input parameter and the coordinates of an image target as a result. Does that exist?
Unfortunately, it is not possible. I've looked for such an option myself several times while working on a Moodstocks (image recognition SDK) / Vuforia mashup (see these 2 blog posts if you are interested in it), but the Vuforia SDK prevents the use of any source other than the camera.
I guess the main reason for this is that camera management is handled entirely inside the Vuforia SDK, probably to make it easier to use, since managing the camera yourself is at best a boring task (lines and lines of code to repeat in each project...), at worst a huge pain in the ass (especially on Android, where some devices don't behave as expected).
By the way, it looks to me like the Vuforia SDK is not the best solution for your use case: it is mainly an augmented-reality SDK, focused on real-time tracking, which implies working with a camera stream... so using it for "simple" image recognition really is overkill!

Unity3D on iOS, inspecting the device camera image in Obj-C

I have a Unity/iOS app that captures the user's photo and displays it in the 3D environment. Now I'd like to leverage CIFaceFeature to find eye positions, which requires accessing the native (Objective-C) layer. My flow looks like:
Unity -> WebCamTexture (encode and send image to native -- this is SLOW)
Obj-C -> CIFaceFeature (find eye coords)
Unity -> Display eye positions
I've got a working prototype, but it's slow because I'm capturing the image in Unity (WebCamTexture) and then sending it to Obj-C to do the FaceFeature detection. It seems like there should be a way to simply ask my Obj-C class to "inspect the active camera". This would have to be much, much faster than encoding and passing an image.
So my question, in a nutshell:
Can I query in Obj-C 'is there a camera currently capturing?'
If so, how do I 'snapshot' the image from that currently running session?
Thanks!
You can access the camera's preview capture stream by changing CameraCapture.mm in Unity.
I suggest that you have a look at an existing plugin called Camera Capture for an example of how additional camera I/O functionality can be added to the capture session / "capture pipeline".
To set you off in the right direction, have a look at the function initCapture in CameraCapture.mm:
- (bool)initCapture:(AVCaptureDevice*)device width:(int)w height:(int)h fps:(float)fps
This is where you will be able to add your own outputs to the capture session.
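For illustration, the kind of addition meant here might look like the following. It is an untested sketch: the _captureSession member and queue name are assumptions about how CameraCapture.mm is laid out, and the CIDetector is created per frame only for brevity (you would normally cache it).

// Inside initCapture, after the session and camera input have been set up:
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
videoOutput.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
[videoOutput setSampleBufferDelegate:self
                               queue:dispatch_queue_create("eye.detection", DISPATCH_QUEUE_SERIAL)];
if ([_captureSession canAddOutput:videoOutput])
    [_captureSession addOutput:videoOutput];

// Delegate callback: run CoreImage face detection on each frame.
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CIImage *frame = [CIImage imageWithCVPixelBuffer:CMSampleBufferGetImageBuffer(sampleBuffer)];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:@{ CIDetectorAccuracy : CIDetectorAccuracyLow }];
    for (CIFaceFeature *face in [detector featuresInImage:frame]) {
        if (face.hasLeftEyePosition && face.hasRightEyePosition) {
            // face.leftEyePosition / face.rightEyePosition are in image coordinates;
            // hand them back to Unity from here (see the native-plugin note below).
        }
    }
}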
And then you should have a look at the code sample provided by Apple on facial recognition:
https://developer.apple.com/library/ios/samplecode/SquareCam/Introduction/Intro.html
Cheers
Unity 3D allows execution of native code. In the scripting reference, look for native plugins. This way you can display a native iOS view (with the camera view, possibly hidden depending on your requirements) and run Objective-C code. Then return the results of eye detection to Unity if you need them in the 3D view.
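A minimal sketch of what that bridge can look like is below. All names are illustrative: the GameObject "EyeReceiver" and its OnEyePositions(string) handler are assumed to exist in the Unity scene, and _StartEyeDetection is a made-up entry point you would declare in C# with [DllImport("__Internal")].

// Native plugin side (.mm file, so extern "C" is available).
extern "C" void _StartEyeDetection()
{
    // set up the AVCaptureSession / CIDetector pipeline here
}

// Called from the detection callback; pushes results back to Unity as a string.
static void ReportEyes(CGPoint leftEye, CGPoint rightEye)
{
    NSString *payload = [NSString stringWithFormat:@"%f,%f;%f,%f",
                         leftEye.x, leftEye.y, rightEye.x, rightEye.y];
    UnitySendMessage("EyeReceiver", "OnEyePositions", [payload UTF8String]);
}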
