I am making an augmented reality app that demonstrates the features of a MacBook, and I am using the Vuforia SDK.
Here is my problem:
1) I tried the Vuforia Core Samples and used Image Targets. With Image Targets, it shows only one augmentation at a time. I attached the output in the image below.
2) My expectation is to show multiple texts or images while capturing the real MacBook, like the image below.
Please guide me on how to achieve this.
The Vuforia iOS SDK uses OpenGL ES to load 3D objects, which is unfriendly to work with.
You can use SceneKit instead: put your objects in a scene, set up a rectangle node, and place a model at each of the four corners of the rectangle. When image tracking succeeds, load your scene.
For how to use SceneKit with Vuforia, check this: https://github.com/yshrkt/VuforiaSampleSwift
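Here is a minimal SceneKit sketch of that idea, as methods you could add to your view controller. The plane size, the corner marker geometry, and the imageTrackingDidSucceed hook are assumptions for illustration; the actual callback depends on how you wire Vuforia (for example via the sample linked above) into your project.

#import <SceneKit/SceneKit.h>
#import <UIKit/UIKit.h>

// Minimal sketch: build a scene with an (invisible) plane node roughly the size of
// the tracked image and one marker node per corner. The sizes, the marker geometry
// and the imageTrackingDidSucceed hook are illustrative assumptions, not Vuforia API.
- (SCNScene *)buildOverlayScene
{
    SCNScene *scene = [SCNScene scene];

    // Plane roughly matching the tracked image (e.g. a MacBook lid).
    SCNPlane *plane = [SCNPlane planeWithWidth:0.25 height:0.17];
    plane.firstMaterial.diffuse.contents = [UIColor clearColor];
    SCNNode *planeNode = [SCNNode nodeWithGeometry:plane];
    [scene.rootNode addChildNode:planeNode];

    // One small marker per corner; replace the boxes with your own models or SCNText.
    CGFloat w = 0.25f / 2.0f, h = 0.17f / 2.0f;
    NSArray *corners = @[@[@(-w), @(h)], @[@(w), @(h)], @[@(-w), @(-h)], @[@(w), @(-h)]];
    for (NSArray *corner in corners) {
        SCNBox *box = [SCNBox boxWithWidth:0.02 height:0.02 length:0.02 chamferRadius:0];
        SCNNode *marker = [SCNNode nodeWithGeometry:box];
        marker.position = SCNVector3Make([(NSNumber *)corner[0] floatValue],
                                         [(NSNumber *)corner[1] floatValue],
                                         0.01f);
        [planeNode addChildNode:marker];
    }
    return scene;
}

// Call this from wherever your Vuforia integration reports a successful image track
// (the exact hook depends on the sample/plugin you use).
- (void)imageTrackingDidSucceed
{
    self.sceneView.scene = [self buildOverlayScene]; // self.sceneView is assumed to be an SCNView
}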
I am developing an augmented reality application that recognizes objects and changes their colors in real time (for example, the color of the walls of a house). Can I use the Vuforia SDK for this, or are there other, better SDKs I should use?
Basically yes - Vuforia is able to detect pre-known images and let you know when something was detected, and what you do then is up to you. However, it depends on the objects you plan to recognize - Vuforia does not allow just any image to be detected, the image must have enough features. You can read about this here: https://developer.vuforia.com/library/articles/Solution/Natural-Features-and-Ratings
I would love to reproduce a Google Earth-like 3D map flyover, even when offline.
As of iOS 7, MapKit allows us to draw custom offline tiles. It also allows us to set a camera in order to see the map in 3D (or 2.5D, as you may prefer to call it).
I was wondering: can I draw a 3D shape on my custom tiles, the way Apple does for its Flyover feature?
I need to apply a "bump map" to the map in order to get a Google Earth-like 3D view, and I was wondering whether Apple would allow me to do just that with iOS 7's custom tile rendering plus camera settings.
Thanks
I have experimented pretty extensively with this, but there is no supported way to do it. Right now, Apple only offers raster tile-based overlays, albeit with an automatic 2.5D/3D transformation when they are overlaid on the map. Hopefully in the future they will support a 3D API and/or custom (say, OpenGL-based) augmentation of the map.
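For the part that is supported (custom raster tiles plus a pitched camera), a minimal sketch could look like the following; the tile URL template and the coordinates are placeholders, and the pitch only gives you the 2.5D perspective, not real terrain.

#import <MapKit/MapKit.h>

// Sketch of the supported setup: custom raster tiles plus a pitched MKMapCamera.
// The tile URL template and the coordinates are placeholders.
- (void)setUpOfflineTilesAndCamera:(MKMapView *)mapView
{
    // Custom (bundled or locally served) tiles that replace Apple's map content.
    MKTileOverlay *tiles = [[MKTileOverlay alloc] initWithURLTemplate:@"file:///path/to/tiles/{z}/{x}/{y}.png"];
    tiles.canReplaceMapContent = YES;
    [mapView addOverlay:tiles level:MKOverlayLevelAboveLabels];

    // Tilt the camera for the 2.5D perspective; there is no API to extrude the tiles themselves.
    CLLocationCoordinate2D center = CLLocationCoordinate2DMake(45.4384, 10.9916);
    CLLocationCoordinate2D eye = CLLocationCoordinate2DMake(45.4200, 10.9916);
    MKMapCamera *camera = [MKMapCamera cameraLookingAtCenterCoordinate:center
                                                     fromEyeCoordinate:eye
                                                           eyeAltitude:700];
    camera.pitch = 60; // degrees; MapKit clamps it if out of range
    [mapView setCamera:camera animated:YES];
}

// MKMapViewDelegate: return a renderer for the tile overlay.
- (MKOverlayRenderer *)mapView:(MKMapView *)mapView rendererForOverlay:(id<MKOverlay>)overlay
{
    if ([overlay isKindOfClass:[MKTileOverlay class]]) {
        return [[MKTileOverlayRenderer alloc] initWithTileOverlay:(MKTileOverlay *)overlay];
    }
    return [[MKOverlayRenderer alloc] initWithOverlay:overlay];
}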
Is it possible to use Vuforia without a camera for image tracking?
Basically, I would like a function I could call with an image as an input parameter and get the coordinates of an image target as a result. Does that exist?
It is unfortunately not possible. I've been looking for such an option myself several times while working on a Moodstocks (image recognition SDK) / Vuforia mashup (see these 2 blog posts if you are interested in it), but the Vuforia SDK prevents the use of any source other than the camera.
I guess the main reason for this is that camera management is fully handled internally by the Vuforia SDK, probably in order to make it easier to use, as managing the camera ourselves is at best a boring task (lines and lines of code to repeat in each project...) and at worst a huge pain in the ass (especially on Android, where some devices don't behave as expected).
By the way, it looks to me like the Vuforia SDK is not the best solution for your use case: it is mainly an augmented reality SDK, focused on real-time tracking, which implies working with a camera stream... so using it to do "simple" image recognition looks like real overkill!
I have a Unity/iOS app that captures the user's photo and displays it in the 3D environment. Now I'd like to leverage CIFaceFeature to find eye positions, which requires accessing the native (Objective-C) layer. My flow looks like:
Unity -> WebCamTexture (encode and send image to native -- this is SLOW)
Obj-C -> CIFaceFeature (find eye coords)
Unity -> Display eye positions
I've got a working prototype, but it's slow because I'm capturing the image in Unity (WebCamTexture) and then sending it to Obj-C to do the FaceFeature detection. It seems like there should be a way to simply ask my Obj-C class to "inspect the active camera". This would have to be much, much faster than encoding and passing an image.
So my question, in a nutshell:
Can I query in Obj-C 'is there a camera currently capturing?'
If so, how do I 'snapshot' the image from that currently running session?
Thanks!
You can access the camera's preview capture stream by modifying CameraCapture.mm in Unity.
I suggest you have a look at an existing plugin called Camera Capture for an example of how additional camera I/O functionality can be added to the capture session / "capture pipeline".
To set you off in the right direction, have a look at the function initCapture in CameraCapture.mm:
- (bool)initCapture:(AVCaptureDevice*)device width:(int)w height:(int)h fps:(float)fps
Here you will be able to add to the capture session.
And then you should have a look at the code sample provided by Apple on facial recognition:
https://developer.apple.com/library/ios/samplecode/SquareCam/Introduction/Intro.html
Cheers
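To make the idea concrete, here is a rough sketch of what could be added: an AVCaptureVideoDataOutput whose sample-buffer delegate runs a CIDetector and reads the CIFaceFeature eye positions. The EyeFinder class name is made up, and exactly where you attach the output inside initCapture depends on your Unity version's CameraCapture.mm.

#import <AVFoundation/AVFoundation.h>
#import <CoreImage/CoreImage.h>

// Hypothetical delegate that inspects frames already flowing through Unity's capture session.
@interface EyeFinder : NSObject <AVCaptureVideoDataOutputSampleBufferDelegate>
@property (nonatomic, strong) CIDetector *faceDetector;
@end

@implementation EyeFinder

- (instancetype)init
{
    if ((self = [super init])) {
        _faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                           context:nil
                                           options:@{CIDetectorAccuracy : CIDetectorAccuracyLow}];
    }
    return self;
}

// Called for every frame: run face detection and read out the eye coordinates.
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    for (CIFaceFeature *face in [self.faceDetector featuresInImage:image]) {
        if (face.hasLeftEyePosition && face.hasRightEyePosition) {
            // Forward face.leftEyePosition / face.rightEyePosition to Unity from here.
        }
    }
}

@end

// Inside initCapture (CameraCapture.mm), something along these lines would attach the
// delegate to the existing session (captureSession is whatever Unity calls its session):
//
//   AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
//   [output setSampleBufferDelegate:eyeFinder queue:dispatch_get_main_queue()];
//   if ([captureSession canAddOutput:output])
//       [captureSession addOutput:output];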
Unity 3D allows execution of native code. In the scripting reference, look for native plugins. That way you can display a native iOS view (with the camera view, possibly hidden depending on your requirements) and run Objective-C code. Then return the results of the eye detection to Unity if you need them in the 3D view.
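For the plugin bridge itself, a rough sketch (with made-up function names and a simplistic way of storing results) could look like this; on the Unity side you would declare the function with [DllImport("__Internal")] and poll it from a script.

// EyeDetectionPlugin.mm -- compiled into the Xcode project that Unity generates.
// The function names and the static storage are purely illustrative.
#import <Foundation/Foundation.h>
#import <CoreGraphics/CoreGraphics.h>

static CGPoint gLeftEye, gRightEye;
static BOOL gHasEyes = NO;

// Call this from your native detection code whenever new eye positions are found.
void EyePlugin_StoreResult(CGPoint leftEye, CGPoint rightEye)
{
    gLeftEye = leftEye;
    gRightEye = rightEye;
    gHasEyes = YES;
}

extern "C" {
    // Unity polls this via [DllImport("__Internal")]; returns 1 if eye positions are available.
    int EyePlugin_GetEyes(float *leftX, float *leftY, float *rightX, float *rightY)
    {
        if (!gHasEyes) return 0;
        *leftX  = gLeftEye.x;   *leftY  = gLeftEye.y;
        *rightX = gRightEye.x;  *rightY = gRightEye.y;
        return 1;
    }
}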
I am a complete beginner in Objective-C and iOS programming. I spent a month figuring out how to show a 3D model using OpenGL ES (version 1.1) on top of the live camera preview using AVFoundation. I am building a kind of augmented reality application on iPad: I process the input frames and show a 3D object overlaid on the camera preview in real time. That went fine because there are so many sites and tutorials about these things (thanks to this website as well).
Now I want to make a screen capture of the whole screen (the model with the camera preview as the background) as an image and show it on the next screen. I found a really good demonstration here: http://cocoacoderblog.com/2011/03/30/screenshots-a-legal-way-to-get-screenshots/. It does everything I want to do. But, as I said before, I am such a beginner that I don't understand the whole project without a detailed explanation. So I've been stuck for a while because I don't know how to implement this.
Does anybody know of a good tutorial or other resource on this topic, or have a suggestion for what I should learn in order to do this screen capture? It would help me a lot to move on.
Thank you in advance.
I'm currently attempting to solve this same problem to allow a user to take a screenshot of an augmented reality app. (We use Qualcomm's AR SDK plugged into Unity 3D to make our AR apps, which saved me from ever having to learn how to programmatically render OpenGL models.)
For my solution I am first looking at implementing the second answer found here: How to take a screenshot programmatically
Barring that, I will have to re-engineer the "Combined Screenshots" method found in CocoaCoder's Screenshots app.
I'll check back in when I figure out which one works better.
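For reference, the UIKit side of that screenshot approach (assuming it is the usual UIGraphics-based one) boils down to something like the sketch below. Keep in mind that renderInContext: generally does not capture OpenGL ES or camera-preview layers, which is exactly why the combined-screenshot idea exists; drawViewHierarchyInRect: (iOS 7+) captures more of what is actually on screen.

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// Minimal UIKit screenshot sketch. renderInContext: generally misses OpenGL ES and
// camera-preview layers; on iOS 7 and later drawViewHierarchyInRect: does better.
UIImage *ScreenshotOfView(UIView *view)
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);

    if ([view respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)]) {
        // iOS 7 and later.
        [view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
    } else {
        // Fallback for older iOS versions.
        [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    }

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}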
Here are three very helpful links for capturing a screenshot:
OpenGL ES View Snapshot
How to capture video frames from the camera as images using AV Foundation
How do I take a screenshot of my app that contains both UIKit and Camera elements
Enjoy
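The OpenGL ES snapshot from the first link is essentially Apple's documented glReadPixels technique; a trimmed sketch (ignoring Retina point/pixel scaling) looks roughly like this, and you would composite the result over a camera frame yourself.

#import <OpenGLES/ES1/gl.h>
#import <UIKit/UIKit.h>

// Sketch of the GL snapshot technique: read the framebuffer back with glReadPixels
// and wrap the pixels in a UIImage. Call it right after drawing, before presenting
// the render buffer. width/height are the backing store size in pixels.
UIImage *GLViewSnapshot(GLint width, GLint height)
{
    NSInteger dataLength = width * height * 4;
    GLubyte *pixels = (GLubyte *)malloc(dataLength);

    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    // Wrap the raw RGBA data in a CGImage.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, dataLength, NULL);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef cgImage = CGImageCreate(width, height, 8, 32, width * 4, colorSpace,
                                       kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                       provider, NULL, NO, kCGRenderingIntentDefault);

    // Drawing into a UIKit image context flips the image the right way up
    // (GL's origin is bottom-left, UIKit's is top-left).
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, height), NO, 1.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CGImageRelease(cgImage);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);
    free(pixels);
    return image;
}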