iOS augmented reality with compass and location

I'm trying to develop a mini "Around Me"-like app using the camera, compass and location. I would like to display images of places on my screen.
For the moment I have my location and my orientation from the compass. I would like to know how I can determine the position of the place I want to display.
Thanks for your help ;)

Once you have the relative distance and bearing, which you can determine from two points in the same coordinate space using algorithms found on this page, figuring out where a known coordinate is with respect to a known viewpoint is basically a perspective projection; the math is outlined in this Wikipedia article. The rotation of the camera is given by the compass, and the tilt by the accelerometer (the position is, of course, GPS).
I'm trying to find a better document; there are a couple of extra things to consider, like the camera parameters, but this is a good starting point.
If it's too involved (for example, if you're not comfortable with rotation matrices), we can break it right down to simple trig.
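To make the simple-trig version concrete, here is a rough sketch (my own, not from the links above) that computes the bearing from your location to a place and maps it to a horizontal screen position. The 60° horizontal field of view and the linear degrees-to-points mapping are simplifying assumptions, and elevation/tilt is ignored:

```swift
import CoreLocation
import UIKit

/// Bearing (in degrees, 0 = north) from `from` to `to`, using the standard
/// great-circle forward-azimuth formula.
func bearing(from: CLLocationCoordinate2D, to: CLLocationCoordinate2D) -> Double {
    let lat1 = from.latitude * .pi / 180
    let lat2 = to.latitude * .pi / 180
    let dLon = (to.longitude - from.longitude) * .pi / 180
    let y = sin(dLon) * cos(lat2)
    let x = cos(lat1) * sin(lat2) - sin(lat1) * cos(lat2) * cos(dLon)
    let degrees = atan2(y, x) * 180 / .pi
    return fmod(degrees + 360, 360)
}

/// Horizontal screen x (in points) for a place, or nil if it falls outside the
/// camera's field of view. `heading` is the compass heading in degrees.
/// The 60° field of view is an assumption; real values vary per device.
func screenX(placeBearing: Double,
             heading: Double,
             horizontalFOV: Double = 60,
             screenWidth: CGFloat = UIScreen.main.bounds.width) -> CGFloat? {
    // Signed angle between where the camera points and where the place is.
    var delta = placeBearing - heading
    if delta > 180 { delta -= 360 }
    if delta < -180 { delta += 360 }
    guard abs(delta) <= horizontalFOV / 2 else { return nil }
    // Simple linear mapping; a true perspective projection would use
    // tan(delta) / tan(fov / 2), but for small angles this is close enough.
    let normalized = delta / horizontalFOV + 0.5   // 0 = left edge, 1 = right edge
    return CGFloat(normalized) * screenWidth
}
```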

The code in the iPhone ARKit project does this, and quite a bit more. While you may not be able to use their complete library, it is a great reference on the subject of augmented reality.

Check out 3DAR; it lets you add an AR view to an MKMapView app very easily. There's a video tutorial on this process, as well as some sample code, on the 3DAR site, www.3dar.us

You can create a location-based AR app in Junaio, an AR browser. It's free to use and deploy, as long as it's not a custom app and it runs inside Junaio.

Related

Detecting a real world object using ARKit with iOS

I am currently playing a bit with ARKit. My goal is to detect a shelf and draw stuff onto it.
I already found ARReferenceImage, and that basically works for a very, very simple prototype, but the image needs to be quite complex, it seems? Xcode always complains if I try to use something a lot simpler (like a QR-code-like image). With that marker I would know the position of an edge, and then I'd know the physical size of my shelf and how to place stuff onto it. So that would be OK, but I think small and simple markers will not work, right?
But ideally I would not need a marker at all.
I know that I can detect e.g. planes, but I want to detect the shelf itself. But as my shelf is open, it's not really a plane. Are there other possibilities to find an object using ARKit?
I know that my question is very vague, but maybe somebody could point me in the right direction. Or tell me if that's even possible with ARKit or if I need other tools? Like Unity?
There are several different possibilities for positioning content in augmented reality. They are called content anchors, and they are all subclasses of the ARAnchor class.
Image anchor
Using an image anchor, you would stick your reference image on a pre-determined spot on the shelf and position your 3D content relative to it.
"the image needs to be quite complex, it seems? Xcode always complains if I try to use something a lot simpler (like a QR-code-like image)"
That's correct. The image needs to have enough visual detail for ARKit to track it. Something like a simple black and white checkerboard pattern doesn't work very well. A complex image does.
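As a rough illustration (my own sketch, not from the original answer; the "AR Resources" group name and the box placement are just example values), detecting a reference image and hanging content off the resulting anchor could look roughly like this:

```swift
import ARKit

// Minimal sketch of image-anchor-based placement. Assumes a reference image
// has been added to an "AR Resources" asset group in the asset catalog and
// that this view controller owns an ARSCNView.
class ImageAnchorViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                                     bundle: nil) else {
            fatalError("Missing expected asset catalog resources.")
        }
        configuration.detectionImages = referenceImages
        sceneView.delegate = self
        sceneView.session.run(configuration)
    }

    // Called when ARKit recognises the reference image stuck to the shelf.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARImageAnchor else { return }
        // Position content relative to the detected image, e.g. a small box
        // 10 cm to the right of the marker.
        let box = SCNNode(geometry: SCNBox(width: 0.05, height: 0.05, length: 0.05, chamferRadius: 0))
        box.position = SCNVector3(0.1, 0, 0)
        node.addChildNode(box)
    }
}
```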
Object anchor
Using object anchors, you scan the shape of a 3D object ahead of time and bundle this data file with your app. When a user uses the app, ARKit will try to recognise this object and if it does, you can position your 3D content relative to it. Apple has some sample code for this if you want to try it out quickly.
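A minimal sketch of enabling object detection, assuming you already scanned the shelf (or some object on it) into an .arobject file in an asset group; the group name "Scanned Objects" is illustrative:

```swift
import ARKit

// Minimal sketch of ARKit object detection. Assumes a scanned .arobject file
// lives in an asset group named "Scanned Objects" (illustrative name).
func runObjectDetection(on sceneView: ARSCNView) {
    let configuration = ARWorldTrackingConfiguration()
    guard let objects = ARReferenceObject.referenceObjects(inGroupNamed: "Scanned Objects",
                                                           bundle: nil) else {
        print("No reference objects found in the asset catalog.")
        return
    }
    configuration.detectionObjects = objects
    sceneView.session.run(configuration)
    // When ARKit recognises the scanned object, the session adds an
    // ARObjectAnchor; handle it in renderer(_:didAdd:for:) just like an
    // image anchor and attach your content to the provided node.
}
```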
Manually creating an anchor
Another option would be to enable ARKit plane detection, and have the user tap a point on the horizontal shelf. Then you perform a raycast to get the 3D coordinate of this point.
You can create an ARAnchor object using this coordinate, and add it to the ARSession.
Then you can again position your content relative to the anchor.
You could also implement a drag gesture to let the user fine-tune the position along the shelf's plane.
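Here is a rough sketch of that tap-to-place flow using the raycasting API (names like handleTap and "shelfAnchor" are illustrative; it assumes plane detection was enabled on the configuration):

```swift
import ARKit

// Minimal sketch of manual anchor placement from a tap. Assumes the session
// was configured with configuration.planeDetection = [.horizontal].
class ShelfPlacementController: UIViewController {
    @IBOutlet var sceneView: ARSCNView!

    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        let point = gesture.location(in: sceneView)
        // Raycast from the tapped screen point onto a detected horizontal plane.
        guard let query = sceneView.raycastQuery(from: point,
                                                 allowing: .existingPlaneGeometry,
                                                 alignment: .horizontal),
              let result = sceneView.session.raycast(query).first else { return }
        // Create an anchor at the hit location and add it to the session;
        // 3D content can then be positioned relative to this anchor.
        let anchor = ARAnchor(name: "shelfAnchor", transform: result.worldTransform)
        sceneView.session.add(anchor: anchor)
    }
}
```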
Conclusion
Which one of these placement options is best for you depends on the use case of your app. I hope this answer was useful :)
References
There are a lot of informative WWDC videos about ARKit. You could start off by watching this one: https://developer.apple.com/videos/play/wwdc2018/610
It is absolutely possible. Whether you do this in Swift or Unity depends entirely on what you are comfortable working in.
ARKit calls them object anchors: https://developer.apple.com/documentation/arkit/arobjectanchor. In other implementations they are often called mesh or model targets.
This YouTube video shows what you want to do in Swift.
But objects like a shelf might be hard to recognize since their content often changes.

Any way to make AR.js camera less sensitive to movement?

After testing with default and custom markers/models of various sizes and distances, I concluded that the reason my AR models are having a seizure (jittering/flickering/shaking like mad) is my hand movement. The model is stable when the (phone) camera is at rest.
Because the intention is to share the end product with the public (or anyone whose phone supports WebRTC), I can't just calibrate my own AR camera, because that would only fix the problem on my phone, not on everyone else's.
Is there a setting in AR.js or ARToolkit that governs the sensitivity of the camera?
If you are facing wild movements/hypersensitive shaking of images with AR.js, and you are using multiple markers on the same page, the solution is to add an <a-entity camera></a-entity> inside the <a-scene> that contains the markers.
This avoids the automatic camera(s) created by A-Frame, and makes everything more stable.
You could use the object position and orientation from AR.js and average that over a few frames to smooth things out.

Placing objects automatically when ground plane detected with vuforia

I'm working on an application where the concept is that you can 'select' objects before actually placing them. So what I wanted to do was have some low-quality objects on a shelf or something like it. When the user selects an object, he can then tap to place the high-quality version of it in his area for further viewing.
I was wondering if it's possible with Vuforia. I wanted to use this platform since it works well from what I could tell and it's cross-platform (the application needs to run on Android and the HoloLens).
I have set up the basic application where you can place a capsule in the area. Now I wanted to automatically place the capsule once Vuforia has detected a ground plane. From what I could see, the plane finder has events that fire when an input is detected, but I couldn't find an event that fires when the ground plane is detected. Is it still possible with Vuforia? I know it's doable with the HoloLens, but I would like to know if it's possible for Android or other mobile devices. I really don't know where to start or what to look for, so I hope someone can point me in the right direction.
Let me know if I need to include more information!
The Vuforia PlaneFinderBehaviour (see doc here) has the event OnAutomaticHitTest, which fires every frame while a ground plane is detected.
So you can use it to automatically spawn an object.
You have to add your method to the On Automatic Hit Test list of the "Plane Finder" instead of the On Interactive Hit Test list.
I've heard that Vuforia Fusion does not yet support ARCore (it supports ARKit), so it uses an internal implementation to simulate ARCore functionality, and they are waiting for a final release of ARCore to support it. Many users have reported that their objects move even when they use an ARCore-supported device.

Creating a 360 photo experience on iOS mobile device

I am interested in VR and trying to get a bit more information. I want to create an experience on iOS where I can take a 360 image and view it on an iOS device by tilting the phone around, using the device's gyroscope; as I tilt the phone, it pans around the 360 image (like on Google Street View, where you can use the tilt gesture).
And something similar to this app: http://bubb.li/
Can anybody give a brief overview of how this would be doable, and any sources that could help me achieve this, APIs, etc.?
Much appreciated.
Two options here: You can use a dedicated device to capture the image for you, or you can write some code to stitch together multiple images taken from the iOS device as you move it around a standing point.
I've used the Ricoh Theta for this (no affiliation). They have a 360 viewer in the SDK for mapping 360 images to a sphere that works exactly as you've asked.
Assuming you've figured out how to create 360 photospheres, you can use Unity or Unreal, and probably other development platforms, to create navigation between the locations you captured.
Here is a tutorial that looks pretty detailed for doing this in Unity:
https://tutorialsforvr.com/creating-virtual-tour-app-in-vr-using-unity/
One pro of doing this in something like Unity or Unreal is that once you have navigation between multiple photospheres working, it's fairly easy to add animation or other interactive elements. I've seen interactive stories done with 360 video using this method.
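If you'd rather stay native on iOS instead of going through Unity, a rough sketch of the same idea with SceneKit and Core Motion could look like this (this is my own sketch, not from the tutorial above; "pano.jpg" is a placeholder asset name, and the quaternion mapping may need adjusting per device orientation):

```swift
import SceneKit
import CoreMotion
import UIKit

// Minimal sketch of a gyroscope-driven 360° photo viewer: an equirectangular
// image is mapped onto the inside of a sphere and the camera rotates with
// the device.
class PanoramaViewController: UIViewController {
    let sceneView = SCNView()
    let cameraNode = SCNNode()
    let motionManager = CMMotionManager()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)

        let scene = SCNScene()
        sceneView.scene = scene

        // Sphere with the panorama on its inside surface.
        let sphere = SCNSphere(radius: 10)
        sphere.firstMaterial?.diffuse.contents = UIImage(named: "pano.jpg")
        sphere.firstMaterial?.isDoubleSided = true
        // Mirror the texture horizontally so it reads correctly from inside.
        sphere.firstMaterial?.diffuse.contentsTransform = SCNMatrix4MakeScale(-1, 1, 1)
        sphere.firstMaterial?.diffuse.wrapS = .repeat
        scene.rootNode.addChildNode(SCNNode(geometry: sphere))

        // Camera at the centre of the sphere.
        cameraNode.camera = SCNCamera()
        scene.rootNode.addChildNode(cameraNode)

        // Drive the camera orientation from the device's attitude.
        motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
        motionManager.startDeviceMotionUpdates(to: .main) { [weak self] motion, _ in
            guard let q = motion?.attitude.quaternion else { return }
            // Map the Core Motion quaternion into SceneKit's coordinate space;
            // this may need tweaking depending on interface orientation.
            self?.cameraNode.orientation = SCNQuaternion(Float(q.x), Float(q.y), Float(q.z), Float(q.w))
        }
    }
}
```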
(I see that the question is from a fairly long time ago, but it was one of the top results when I looked for this same topic)

camera overlay change with bearing and elevation

Folks,
I am trying to build a utility as shown in the picture below. Basically the camera display window covers part of the device's screen, and a list of points connected by a curve or straight line is presented over the camera view as an overlay. I understand this can be drawn using Quartz, but this is less than half of my problem.
The real issue is that the overlay should present different points as the bearing and elevation changes.
For example:
if the bearing changes by +5 degrees and the elevation by +2 degrees, then PT1 will be next to the right edge of the camera view, PT2 will also move to the right, and PT3 will become visible.
Another movement that changes the bearing by +10 degrees would make PT1 invisible, put PT2 at the right, PT3 in the middle, and PT4 on the left edge of the camera view.
My questions after the picture:
Is it possible to have a view that is substantially larger than the camera view (as shown below) and use some methods (I need to research these) to move the view when the bearing/elevation changes? Is it recommended performance-wise?
Is Quartz the way to go here? What else do I need (other than, of course, AVFoundation for the camera and Core Location/Core Motion)? Since my application is iOS 7 only, I can use any new methods/APIs exclusive to iOS 7.
Aside from Ray Wenderlich's tutorial on the augmented reality game, are there any tutorials that you know of that could help me with this endeavor?
Have a look at the following; each article or link covers different key things required to make your final product. You will eventually be using a combination of geolocation, the compass, and/or the iPhone's gyroscope data.
Reading all the references and implementing them one by one in separate projects will give you a solid start on combining it all into your application. But first you need a solid understanding of what you learn and how you can then apply it to your project.
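As a rough sketch of the core idea from your question, here is one way to convert bearing/elevation changes into point offsets for a large overlay view (the field-of-view numbers, class names, and sign conventions are my own assumptions, not taken from any of the references below):

```swift
import UIKit

// Minimal sketch: scroll a large overlay view as the bearing/elevation change.
// The overlay is assumed to be wider and taller than the camera preview;
// the field-of-view values are rough assumptions.
final class OverlayScroller {
    let overlayView: UIView            // large view containing PT1...PTn
    let cameraViewSize: CGSize         // size of the visible camera window
    let horizontalFOV: CGFloat = 58    // assumed horizontal field of view, degrees
    let verticalFOV: CGFloat = 32      // assumed vertical field of view, degrees

    // Bearing/elevation that correspond to the overlay's current centre.
    var referenceBearing: CGFloat = 0
    var referenceElevation: CGFloat = 0

    init(overlayView: UIView, cameraViewSize: CGSize) {
        self.overlayView = overlayView
        self.cameraViewSize = cameraViewSize
    }

    /// Call this whenever a new compass heading (degrees) and pitch (degrees)
    /// arrive from CLLocationManager / CMMotionManager.
    func update(bearing: CGFloat, elevation: CGFloat) {
        let pointsPerDegreeX = cameraViewSize.width / horizontalFOV
        let pointsPerDegreeY = cameraViewSize.height / verticalFOV
        // Translate the overlay by (degrees of change) x (points per degree).
        // Flip the signs if the points move the wrong way for your layout.
        let dx = -(bearing - referenceBearing) * pointsPerDegreeX
        let dy = (elevation - referenceElevation) * pointsPerDegreeY
        overlayView.transform = CGAffineTransform(translationX: dx, y: dy)
    }
}
```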
References:
A cool project from Ray Wenderlich teaches you how to use GPS location coordinates in your application:
Augmented reality location-based tutorial
The next two links show you how to grab gyroscope data to find out the pitch, yaw, and rotation, and to work out the device's current position in space:
Apple gyroscope example app
Another core motion gyroscope example
This one will teach you how to use the compass:
Ray Wenderlich's augmented reality compass tutorial for iOS
Here's some more augmented reality material on overlaying content on the camera view:
iPhone AR Toolkit
Augmented reality marker tracking tutorial
