Swift: 360 image with Device Motion - iOS

I’m trying to make an iOS app for viewing 360-degree images on Google Cardboard (or any other VR glasses), and I’m kind of stuck. What I want is the same thing the app Round.me (https://round.me/tour/19890/view) does, using Device Motion.
I don’t have much experience with Swift, so I’m trying to figure out which library I should use. I tried to move the UIView by changing its offset, but with no success.
Can anyone give me a tip or recommendation on where I should start and what I should use?

You may want to take a look at the Panorama library. It rotates 360 images manually, but it also has VR capabilities.
I didn't test its VR capability, but the manual scroll seems to work perfectly, so it should be a good start for you. It's written in Objective-C, but if you install it with CocoaPods it should work. In my case, though, I had to edit the library and add #import <GLKit/GLKit.h> to the PanoramaView.h file.
Also go to Link Binary with Libraries and add the frameworks the library depends on.
https://github.com/robbykraft/Panorama
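If you go this route from Swift, a rough sketch of wiring the library up might look like this. The class and method names (PanoramaView, setImageWithName:, orientToDevice) are recalled from the project's README rather than verified, so treat them as assumptions and check them against the actual headers; the Objective-C class comes in through a bridging header, and the image name is a placeholder.

```swift
import UIKit
import GLKit

// Assumes a bridging header containing: #import "PanoramaView.h"
class PanoramaViewController: GLKViewController {
    private var panoramaView: PanoramaView!

    override func viewDidLoad() {
        super.viewDidLoad()
        panoramaView = PanoramaView()
        panoramaView.setImageWithName("pano.jpg")  // hypothetical asset name
        panoramaView.orientToDevice = true         // pan with Device Motion
        panoramaView.touchToPan = false
        view = panoramaView
    }

    // GLKViewController redraws each frame; forward the call to the library.
    override func glkView(_ view: GLKView, drawIn rect: CGRect) {
        panoramaView.draw()
    }
}
```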

Related

Apple's AVCamera Night Mode

I've been building a camera app using AVFoundation and wanted to add Night Mode support to it.
Apple features a stunning implementation of this; more about it can be read here: https://www.macrumors.com/guide/night-mode/
Now, the only "properties" I can find related to "night mode" are the low-light boost ones, which only seem to be an iPhone 5 feature:
https://developer.apple.com/documentation/avfoundation/avcapturedevice/1624602-islowlightboostenabled
https://forums.developer.apple.com/thread/52574
I'd like to take advantage of Apple's native Night Mode and implement it in my camera app. Is there any way to do so? Is this a feature that might be added to the SDK in one of the next releases? Did I miss something in the SDK?
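For reference, the low-light boost mentioned above is an opt-in device setting rather than an automatic one; a minimal sketch of enabling it where supported:

```swift
import AVFoundation

// Opt in to low-light boost on devices whose hardware supports it.
if let device = AVCaptureDevice.default(for: .video) {
    do {
        try device.lockForConfiguration()
        if device.isLowLightBoostSupported {
            device.automaticallyEnablesLowLightBoostWhenAvailable = true
            // device.isLowLightBoostEnabled is read-only and reports
            // whether the boost is currently active.
        }
        device.unlockForConfiguration()
    } catch {
        print("Could not lock capture device: \(error)")
    }
}
```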
Well, some time ago I tried to replicate what Apple calls Night Mode.
Of course, there is no reference to it inside AVFoundation, as is the case with HDR or Smart HDR.
The point is: what does Night Mode do? How does Apple achieve this "effect"?
Answering these questions points out why there is nothing for it inside AVFoundation.
They basically blend multiple exposures, replicating what in photography is called a "long exposure". The only way to do the same would be to extract the frames one by one from didOutputSampleBuffer and then find a (good) way to blend them.
Of course, there is a lot more involved, since each frame in the buffer has already been pre-processed by the ISP.
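To illustrate the extract-and-blend idea, here is a minimal sketch that averages a handful of frames from an AVCaptureVideoDataOutput with Core Image. The naive running average merely stands in for the blending step; Apple's actual pipeline also aligns and weights frames, none of which is attempted here, and the frame count is an arbitrary choice.

```swift
import AVFoundation
import CoreImage

/// Collects frames from the video data output and averages them,
/// roughly imitating a long exposure.
final class FrameBlender: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private var accumulated: CIImage?
    private var frameCount = 0
    private let targetFrames = 8  // arbitrary for this sketch

    /// The blended result so far; render it with a CIContext when done.
    var blendedImage: CIImage? { accumulated }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard frameCount < targetFrames,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let frame = CIImage(cvPixelBuffer: pixelBuffer)

        if let acc = accumulated {
            // Running average: new = acc * n/(n+1) + frame * 1/(n+1)
            let n = CGFloat(frameCount)
            let scaledAcc = scale(acc, by: n / (n + 1))
            let scaledFrame = scale(frame, by: 1 / (n + 1))
            accumulated = scaledFrame.applyingFilter("CIAdditionCompositing",
                parameters: [kCIInputBackgroundImageKey: scaledAcc])
        } else {
            accumulated = frame
        }
        frameCount += 1
    }

    /// Multiplies the RGB channels of an image by a constant factor.
    private func scale(_ image: CIImage, by factor: CGFloat) -> CIImage {
        image.applyingFilter("CIColorMatrix", parameters: [
            "inputRVector": CIVector(x: factor, y: 0, z: 0, w: 0),
            "inputGVector": CIVector(x: 0, y: factor, z: 0, w: 0),
            "inputBVector": CIVector(x: 0, y: 0, z: factor, w: 0)
        ])
    }
}
```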

How to reproduce watchOS closing activity ring animation with sparks on iOS?

How can I best reproduce the closing activity ring animation from watchOS 4 on iOS? I'm particularly interested in the rotating, sparkling effect.
Here is a still frame of the animation I'm talking about:
and here is a video of it.
Is it possible to implement something like this with Core Animation?
Here at the university of science in Zürich, in the usability lab, we use:
Sketch, Illustrator, or designer.gravit.io for designing the SVG sketches;
then we import them into After Effects or Haiku.ai for animating;
and export the result as .json for Airbnb's animation library Bodymovin, also known as Lottie, for which libraries are available for web, Android, and iOS.
The advantage of this solution over #bryanjclark's "exported it as a series of images" approach is that the animation stays sharp at every resolution (SVG), it is only one .json file, and you have full control over its speed and frames.
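To give an idea of the iOS playback side, a minimal sketch using the lottie-ios package; the LottieAnimationView type name matches recent releases of the library (older ones call it AnimationView), and "rings.json" is a hypothetical Bodymovin export bundled with the app:

```swift
import UIKit
import Lottie  // https://github.com/airbnb/lottie-ios

final class RingsViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // "rings.json" is a hypothetical export from After Effects via Bodymovin.
        let animationView = LottieAnimationView(name: "rings")
        animationView.frame = view.bounds
        animationView.contentMode = .scaleAspectFit
        animationView.loopMode = .playOnce
        animationView.animationSpeed = 1.0  // full control over speed and frames
        view.addSubview(animationView)
        animationView.play()
    }
}
```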
Otherwise, if you really want to do it with code only, take a look at this article, done with OpenGL ES 2.0.
Or at the AnimationCore example in this SO answer.
I’m nearly certain that is a pre-rendered animation, not something generated on-device. (If it is generated on-device, it’s not something you’d have API access to.)
I’d bet that:
a designer worked it up in a tool like After Effects,
exported it as a series of images,
then the developers implemented it using something like WKImageAnimatable
You can see other developers using WKImageAnimatable to build gorgeous animations in their WatchKit apps - for example, Cultured Code’s app Things (watch the video there!) has some really terrific little animation flourishes that (almost definitely) use WKImageAnimatable under the hood!
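For reference, driving a pre-rendered frame sequence through WKImageAnimatable takes only a few lines; a sketch, assuming hypothetical bundled frames named "ring0" through "ring59":

```swift
import WatchKit

class RingInterfaceController: WKInterfaceController {
    // Connected in the storyboard; WKInterfaceImage conforms to WKImageAnimatable.
    @IBOutlet private var ringImage: WKInterfaceImage!

    override func willActivate() {
        super.willActivate()
        // setImageNamed(_:) with a prefix picks up the numbered sequence
        // "ring0", "ring1", ... exported by the designer.
        ringImage.setImageNamed("ring")
        ringImage.startAnimatingWithImages(in: NSRange(location: 0, length: 60),
                                           duration: 2.0,
                                           repeatCount: 1)
    }
}
```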

Creating a 360 photo experience on an iOS mobile device

I am interested in VR and trying to get a bit more information. I want to create a similar experience on iOS where I can take a 360 image and view it on an iOS device by tilting the phone around, using the device's gyroscope: as I tilt the phone, it pans around the 360 image (like on Google Street View, where you can use the tilt gesture).
And something similar to this app: http://bubb.li/
Can anybody give a brief overview of how this would be doable, along with any sources that could help me achieve it, APIs, etc.?
Much appreciated.
Two options here: You can use a dedicated device to capture the image for you, or you can write some code to stitch together multiple images taken from the iOS device as you move it around a standing point.
I've used the Ricoh Theta for this (no affiliation). They have a 360 viewer in the SDK for mapping 360 images to a sphere that works exactly as you've asked.
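If you would rather build the viewer yourself with stock frameworks, the same sphere-mapping idea is a fairly small amount of SceneKit plus Core Motion. A minimal sketch, assuming a hypothetical equirectangular image named "pano.jpg"; note that mapping Core Motion's attitude onto SceneKit's axes usually needs an extra reference-frame correction that is omitted here:

```swift
import UIKit
import SceneKit
import CoreMotion

final class PanoViewController: UIViewController {
    private let motionManager = CMMotionManager()
    private let cameraNode = SCNNode()

    override func viewDidLoad() {
        super.viewDidLoad()
        let sceneView = SCNView(frame: view.bounds)
        sceneView.scene = SCNScene()
        view.addSubview(sceneView)

        // Texture the inside of a large sphere with the panorama.
        let sphere = SCNSphere(radius: 20)
        sphere.firstMaterial?.diffuse.contents = UIImage(named: "pano.jpg")
        sphere.firstMaterial?.isDoubleSided = true  // render the inner surface
        sceneView.scene?.rootNode.addChildNode(SCNNode(geometry: sphere))

        // The camera sits at the sphere's center.
        cameraNode.camera = SCNCamera()
        sceneView.scene?.rootNode.addChildNode(cameraNode)

        // Pan by feeding the device attitude into the camera's orientation.
        motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
        motionManager.startDeviceMotionUpdates(to: .main) { [weak self] motion, _ in
            guard let q = motion?.attitude.quaternion else { return }
            self?.cameraNode.orientation = SCNQuaternion(x: Float(q.x), y: Float(q.y),
                                                         z: Float(q.z), w: Float(q.w))
        }
    }
}
```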
Assuming you've figured out how to create 360 photospheres, you can use Unity, Unreal, and probably other development platforms to create navigation between the locations you captured.
Here is a tutorial that looks pretty detailed for doing this in Unity:
https://tutorialsforvr.com/creating-virtual-tour-app-in-vr-using-unity/
One pro of doing this in something like Unity or Unreal is that once you have navigation between multiple photospheres working, it's fairly easy to add animation or other interactive elements. I've seen interactive stories done with 360 video using this method.
(I see that the question is from a fairly long time ago, but it was one of the top results when I looked for this same topic)

Is it possible to view a panoramic image using Google Cardboard in an iOS application?

I have been searching for ways to integrate the Google Cardboard SDK in iOS. One way is using Unity, but I am looking for something through which I can directly integrate the Cardboard SDK in iOS, and I want to view a panoramic image with it. Is there any way to do that?
I am looking for an iOS alternative to this project: Link Here
Okay, I've spent a few days getting CardboardSDK-iOS to do what I want (which is like the "Exhibit" demo in the Google Cardboard app), and I'm pretty pleased with it. I'm guessing that it's pretty faithful to the original, but since I'm not familiar with the original, I can't say for sure.
But I can say that it's not just a case of dropping a panoramic data set in. You need to do a bit of work to display the required stereo image pair, in OpenGL, depending on where the viewer has their head pointing. If you understand 3D transforms and how OpenGL works, and you've got your data prepared correctly, it should not be too onerous to get it working.
Of course, this is all done in Xcode in Objective-C/C++, not in Java. And I'm assuming that by "panoramic image" you mean a hemispherical stereo data set, which should give you something like what you see in Google's Cardboard "Urban Hike" demo.
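To make the "where the viewer has their head pointing" part concrete, here is a small sketch of deriving per-eye view matrices from the device attitude (shown in Swift with GLKit, even though the library itself is Objective-C). The 64 mm interpupillary distance is an assumed value, and axis conventions between Core Motion and a particular GL scene usually need one extra fixed rotation that is omitted here:

```swift
import GLKit
import CoreMotion

// Half of an assumed 64 mm interpupillary distance, in meters.
let halfIPD: Float = 0.032

/// Builds the left/right view matrices for a stereo pair from head attitude.
func eyeViewMatrices(for motion: CMDeviceMotion) -> (left: GLKMatrix4, right: GLKMatrix4) {
    let r = motion.attitude.rotationMatrix
    // Transposing the attitude rotation gives its inverse, which is the
    // rotation part of a view (world-to-eye) matrix.
    let headInverse = GLKMatrix4Make(
        Float(r.m11), Float(r.m12), Float(r.m13), 0,
        Float(r.m21), Float(r.m22), Float(r.m23), 0,
        Float(r.m31), Float(r.m32), Float(r.m33), 0,
        0, 0, 0, 1)
    // Each eye's view then shifts the world sideways by half the IPD.
    let left  = GLKMatrix4Multiply(GLKMatrix4MakeTranslation( halfIPD, 0, 0), headInverse)
    let right = GLKMatrix4Multiply(GLKMatrix4MakeTranslation(-halfIPD, 0, 0), headInverse)
    return (left, right)
}
```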
Hope this helps!

Is it possible to use Vuforia without a camera?

Is it possible to use Vuforia without a camera for image tracking?
Basically, I would like a function I could call with an image as an input parameter that returns the coordinates of an image target as a result. Does that exist?
It is unfortunately not possible. I've looked for such an option myself several times while working on a Moodstocks (image recognition SDK) / Vuforia mashup (see these 2 blog posts if you are interested), but the Vuforia SDK prevents the use of any source other than the camera.
I guess the main reason for this is that camera management is handled fully internally by the Vuforia SDK, probably in order to make it easier to use, as managing the camera ourselves is at best a boring task (lines and lines of code to repeat in each project...) and at worst a huge pain in the ass (especially on Android, where some devices don't behave as expected).
By the way, it looks to me like the Vuforia SDK is not the best solution for your use case: it is mainly an augmented-reality SDK, focused on real-time tracking, which implies working with a camera stream... so using it for "simple" image recognition looks like real overkill!
