I am looking into VR using native Swift and have found some interesting ways to port the Google Cardboard SDK, but I want to reach out and see whether anyone has experience with this, specifically capturing panoramic images using an iPhone and converting them into VR content.
Good day.
I am using the 0.6 Google Cardboard SDK in Unity 5.3. When I publish apps to an Android device, this works perfectly and looks crystal sharp through the cardboard (Note 4 & S6).
However, when I use the SDK for iOS (iPhone 5 & 6S), the images look very blurry through the cardboard (without the cardboard the display looks as sharp as on Android). When I increase the physical distance between the iPhone and the lenses, the image becomes sharp again. Is there a variable in the Google Cardboard SDK that I should change for iOS?
I have been looking at this for quite some time now, but nothing seems to fix it, and I cannot find anything about it online.
Does somebody know what to do? Thanks in advance for the help!
PS
As a test I did a clean install of the Google Cardboard demo scene. The same problem occurs in that app, so it is definitely not caused by my app.
PPS
In the Android API Reference the following parameter is available:
float getVerticalDistanceToLensCenter()
Is there something like that in the Unity Google Cardboard SDK?
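I don't know of a direct Unity equivalent, but for intuition about why the screen-to-lens distance matters, here is a small illustrative sketch in Python (not SDK code; the function names and numbers are my own assumptions). Cardboard-style viewers are described by a radial distortion polynomial plus a screen-to-lens distance, and that distance directly sets the field of view the renderer has to assume, so a wrong value makes the pre-distorted image mismatch the lenses:

```python
import math

def distort(r, k1, k2):
    """Cardboard-style radial (barrel) distortion polynomial:
    a point at radius r maps to r * (1 + k1*r^2 + k2*r^4)."""
    return r * (1.0 + k1 * r**2 + k2 * r**4)

def half_fov_deg(screen_half_width_m, screen_to_lens_m):
    """Half field of view implied by a given screen-to-lens distance."""
    return math.degrees(math.atan2(screen_half_width_m, screen_to_lens_m))

# Illustrative numbers only (not real device values): moving the phone
# farther from the lenses shrinks the field of view the SDK should render.
print(half_fov_deg(0.06, 0.039))  # roughly 57 degrees at 39 mm
print(half_fov_deg(0.06, 0.050))  # roughly 50 degrees at 50 mm
```

If the iOS build is assuming different viewer parameters than your actual cardboard, this kind of mismatch would explain why physically changing the lens distance "fixes" it.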
Is it possible to recognise light patterns on iOS?
Is there a native iOS SDK to do so?
Use case:
Detect light patterns (e.g. on / off) using smartphone camera
Background information:
Apple acquired Metaio last year, so I presume that at some point we will have such an SDK. For now, though, I presume the best approach is to use a third-party SDK, or to capture images and process them (if the images are simple enough that a simple algorithm can be applied).
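There is no dedicated native iOS API for this, but the "simple algorithm" route mentioned above can be sketched quickly. The following is an illustrative Python sketch (the function name and threshold are my own assumptions, not part of any SDK) that classifies each grayscale frame as light-on or light-off by mean brightness; on iOS you would feed it frames grabbed from the camera instead:

```python
def decode_on_off(frames, threshold=128):
    """Classify each camera frame as light-on (True) or light-off (False)
    by comparing its mean brightness against a fixed threshold.

    frames: list of 2-D lists of grayscale pixel values (0-255).
    """
    bits = []
    for frame in frames:
        pixels = [p for row in frame for p in row]
        mean = sum(pixels) / len(pixels)
        bits.append(mean > threshold)
    return bits

# Two fake 2x2 frames: one bright (light on), one dark (light off).
bright = [[250, 240], [245, 255]]
dark = [[10, 5], [8, 12]]
print(decode_on_off([bright, dark, bright]))  # [True, False, True]
```

A real implementation would also need to sample frames at a known rate and debounce the transitions, but the per-frame decision really can be this simple if the light dominates the scene.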
You could take a look at Kudan AR. https://www.kudan.eu/
They currently offer an SDK for iOS, but not yet for Android. Their tracking quality is phenomenally good, but I do not know whether it is appropriate for your goals. It would be best to talk to them and ask whether their tracking fits your needs.
I have been searching for ways to integrate the Google Cardboard SDK in iOS. One way is using Unity, but I am looking for something that lets me integrate the Cardboard SDK directly in iOS, and I want to view a panoramic image in it. Is there any way to do that?
I am looking for an iOS alternative for this project : Link Here
Okay, I've spent a few days getting CardboardSDK-iOS to do what I want (which is like the "Exhibit" demo in the Google Cardboard app), and I'm pretty pleased with it. I'm guessing that it's fairly faithful to the original, but since I'm not familiar with the original, I can't say for sure.
What I can say is that it's not just a case of dropping a panoramic data set in. You need to do a bit of work to display the required stereo image pair, in OpenGL, depending on where the viewer has their head pointing. If you understand 3D transforms and how OpenGL works, and you've got your data prepared correctly, it should not be too onerous to get it working.
Of course, this is all done in Xcode in Objective-C/C++, not in Java. And I'm assuming that by "panoramic image" you mean a hemispherical stereo data set that should give you something like what you see in Google's Cardboard "Urban Hike" demo.
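To make that "bit of work" concrete, here is a minimal sketch of the core stereo step (illustrative Python with made-up names, not the CardboardSDK-iOS API): each eye's viewpoint is the head position offset by half the interpupillary distance along the head's local right axis, and you then render the scene once per eye from that viewpoint:

```python
import math

def eye_positions(head_pos, yaw_rad, ipd=0.064):
    """Offset each eye half the interpupillary distance (IPD, ~64 mm)
    along the head's local right axis, given a yaw (heading) in radians.

    Convention: the viewer looks down -Z at yaw 0, yaw rotates about +Y.
    """
    right = (math.cos(yaw_rad), 0.0, -math.sin(yaw_rad))
    half = ipd / 2.0
    left_eye = tuple(p - half * r for p, r in zip(head_pos, right))
    right_eye = tuple(p + half * r for p, r in zip(head_pos, right))
    return left_eye, right_eye

# Viewer at the origin looking straight ahead (yaw = 0):
l, r = eye_positions((0.0, 0.0, 0.0), 0.0)
print(l, r)  # eyes 64 mm apart along the x-axis
```

In the real thing you would build a full view matrix from the head-tracking quaternion rather than just a yaw angle, but the per-eye translation is the same idea.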
Hope this helps!
I have integrated Google Maps SDK to an iOS application and I would like to display 3D Satellite maps. According to the documentation this should work just directly. I can tilt the view, but the displayed map remains flat (i.e. mountains do not show up in 3D as they do in Google Earth).
I have searched extensively for this but found no mention of whether it actually works. Does anybody know whether 3D maps in the Google SDK work on iOS and I am just hitting some limitation or a wrong switch, or whether they do not work at all?
As of SDK v1.8, tilted layers do appear to have some 3D elevation effects, but the effect is subtler than in Google Earth.
Is it possible to use Vuforia without a camera for image tracking?
Basically, I would like a function I could call with an image as an input parameter that returns the coordinates of an image target. Does that exist?
Unfortunately, it is not possible. I have looked for such an option myself several times while working on a Moodstocks (image recognition SDK) / Vuforia mashup (see these 2 blog posts if you are interested in it), but the Vuforia SDK prevents the use of any source other than the camera.
I guess the main reason is that camera management is handled entirely inside the Vuforia SDK, probably to make it easier to use, since managing the camera yourself is at best a boring task (lines and lines of code to repeat in each project...) and at worst a huge pain (especially on Android, where some devices don't behave as expected).
By the way, the Vuforia SDK looks to me like the wrong fit for your use case: it is mainly an augmented-reality SDK, focused on real-time tracking, which implies working with a camera stream, so using it for "simple" image recognition on a single image is really overkill!
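For contrast, here is roughly what "simple" image recognition amounts to on a single still image: a brute-force template match, sketched in pure Python (illustrative only; a real implementation would use an optimized routine such as OpenCV's matchTemplate rather than nested loops):

```python
def find_patch(image, patch):
    """Brute-force template match: slide `patch` over `image` (both 2-D
    lists of grayscale values) and return the (row, col) position with
    the smallest sum of absolute differences (SAD)."""
    ih, iw = len(image), len(image[0])
    ph, pw = len(patch), len(patch[0])
    best, best_pos = None, None
    for r in range(ih - ph + 1):
        for c in range(iw - pw + 1):
            sad = sum(
                abs(image[r + i][c + j] - patch[i][j])
                for i in range(ph) for j in range(pw)
            )
            if best is None or sad < best:
                best, best_pos = sad, (r, c)
    return best_pos

# A 4x4 image containing the 2x2 patch at row 1, col 2:
image = [
    [0, 0, 0, 0],
    [0, 0, 9, 8],
    [0, 0, 7, 9],
    [0, 0, 0, 0],
]
patch = [[9, 8], [7, 9]]
print(find_patch(image, patch))  # (1, 2)
```

This only handles exact-scale, upright matches; anything involving rotation, scale, or perspective is where dedicated recognition SDKs earn their keep.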