How to rotate Apple map in driving direction in iOS?

I want to rotate the Apple map based on the user's driving direction.
Any idea or solution to rotate the map that way?
I can collect an array of the last 5–10 locations the user moved through; based on that, can I calculate a heading or anything else to rotate the map?
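One common approach: take two of the buffered locations and compute the initial bearing (forward azimuth) between them, then use that as the map heading. Below is a minimal sketch of the standard formula, written in Java for the math (the `Heading` class name is invented for illustration; on iOS you would port this to Swift, or simply use `CLLocation.course` whenever it is valid, i.e. non-negative):

```java
// Sketch: initial bearing between two GPS fixes, e.g. the oldest and
// newest entries of the buffered locations. Class name is hypothetical.
public class Heading {
    /** Bearing in degrees [0, 360), measured clockwise from true north. */
    public static double bearing(double lat1, double lon1,
                                 double lat2, double lon2) {
        double phi1 = Math.toRadians(lat1);
        double phi2 = Math.toRadians(lat2);
        double dLambda = Math.toRadians(lon2 - lon1);
        double y = Math.sin(dLambda) * Math.cos(phi2);
        double x = Math.cos(phi1) * Math.sin(phi2)
                 - Math.sin(phi1) * Math.cos(phi2) * Math.cos(dLambda);
        double theta = Math.toDegrees(Math.atan2(y, x));
        return (theta + 360.0) % 360.0;  // normalize to [0, 360)
    }
}
```

Averaging the bearings over the last few fixes helps smooth out GPS jitter before rotating the map.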

Related

Spherical 360 photo coordinates on SCNSphere

The best way to explain what I want is Google Street View. I have a spherical 360° photo rendered using CTPanoramaView, and that works nicely. Now I need what Street View does: a way to detect a tap position on that 360° image and switch to another image. To do that, I need to somehow map coordinates from the 2D image to a location on the 3D sphere. For example, a tap on the chandelier below opens up another photo (outer space :)
So, disregarding the resolution of the photo (or not; pixel resolution can also work, say the photo is 4096 × 2048 and I need to convert a tap on the sphere to that resolution), taking for example that both x and y go from 0 to 1, I want to detect a tap at x = 0.247 and y = 0.4 on this photo,
by converting a tap on the SCNSphere on which this image is rendered. See this screenshot:
What I currently have is the ability to detect the tap position on the sphere by running hitTest on the SCNSphere where the photo is rendered and receiving an SCNHitTestResult.
SCNHitTestResult seems like a starting point, but I don't really know how to convert its coordinates to the point I'm looking for.
Thanks in advance.
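If the hit point is taken in the sphere's local coordinates (e.g. `SCNHitTestResult.localCoordinates`), mapping it to equirectangular (u, v) is essentially two inverse trig calls. Here is a sketch of the math in Java (the `PanoramaMapper` class is invented; the exact sign and offset of u depend on how CTPanoramaView orients the texture on the sphere, so you may need to flip or shift u by 0.25 or 0.5 to match):

```java
// Sketch: point on a sphere of radius r -> equirectangular texture coords.
// Assumes +Y is up and the texture seam sits at atan2(x, z) == pi.
public class PanoramaMapper {
    /** Returns {u, v}, each in [0, 1], for a point (x, y, z) on the sphere. */
    public static double[] sphereToUV(double x, double y, double z, double r) {
        double u = 0.5 + Math.atan2(x, z) / (2.0 * Math.PI);  // longitude
        double v = 0.5 - Math.asin(y / r) / Math.PI;          // latitude
        return new double[] { u, v };
    }
}
```

Multiplying u by 4096 and v by 2048 then gives pixel coordinates in the example photo.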

How to put an object in the air?

It seems HitResult only gives us an intersection with a surface (plane) or a point cloud. How can I get a point in mid-air from my click, and thus place an object floating in the air?
It really depends on what you mean by "in the air". Two possibilities I see:
"Above a detected surface" Do a normal hit test against a plane, and offset the returned pose by some Y distance to get the hovering location. For example:
Pose.makeTranslation(0, 0.5f, 0).compose(hitResult.getHitPose())
returns a pose that is 50 cm above the hit location. Create an anchor from this and you're good to go. You could also just create the anchor at the hit location and compose it with the Y translation each frame, to allow for animating the hover height.
"Floating in front of the current device position" For this you probably want to compose a translation on the right hand side of the camera pose:
frame.getPose().compose(Pose.makeTranslation(0, 0, -1.0f)).extractTranslation()
gives you a translation-only pose that is 1m in front of the center of the display. If you want to be in front of a particular screen location, I put some code in this answer to do screen point to world ray conversion.
Apologies if you're in Unity/Unreal, your question didn't specify so I assumed Java.
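The difference between the two compose orders can be sanity-checked with a minimal stand-in for Pose that carries only a translation and a yaw (the real ARCore Pose uses a full quaternion rotation; `MiniPose` here is invented purely for illustration):

```java
// Minimal Pose stand-in (translation + yaw about +Y) to show why the
// side of compose() matters. Not part of ARCore.
public class MiniPose {
    final double tx, ty, tz;  // translation
    final double yaw;         // rotation about +Y, in radians

    MiniPose(double tx, double ty, double tz, double yaw) {
        this.tx = tx; this.ty = ty; this.tz = tz; this.yaw = yaw;
    }

    static MiniPose makeTranslation(double x, double y, double z) {
        return new MiniPose(x, y, z, 0);
    }

    /** this.compose(other): transform by `other` first, then by `this`. */
    MiniPose compose(MiniPose other) {
        // Rotate other's translation by this pose's yaw, then add.
        double rx =  Math.cos(yaw) * other.tx + Math.sin(yaw) * other.tz;
        double rz = -Math.sin(yaw) * other.tx + Math.cos(yaw) * other.tz;
        return new MiniPose(tx + rx, ty + other.ty, tz + rz, yaw + other.yaw);
    }
}
```

With this, `makeTranslation(0, 0.5, 0).compose(hit)` offsets along the world up axis no matter how the hit pose is rotated, while `cam.compose(makeTranslation(0, 0, -1))` offsets along the camera's own forward axis.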
The reason a hit result is so often interpreted as the position the user wanted is that there is actually no closed-form solution for this interaction: which of the infinitely many positions along the ray from the camera into the scene did the user mean? The 2D coordinates of a click leave the third dimension undefined.
As you said "middle of the air", why not take the centre between the camera position and the hit result?
You can extract the current position using pose.getTranslation: https://developers.google.com/ar/reference/java/com/google/ar/core/Pose.html#getTranslation(float[],%20int)
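Concretely, that midpoint is just the average of the two translation arrays filled in by getTranslation (a sketch; the `Midpoint` class name is made up):

```java
// Sketch: halfway point between the camera translation and the hit
// translation, each a 3-element array as filled by Pose.getTranslation.
public class Midpoint {
    public static float[] midpoint(float[] camera, float[] hit) {
        float[] mid = new float[3];
        for (int i = 0; i < 3; i++) {
            mid[i] = (camera[i] + hit[i]) / 2.0f;
        }
        return mid;
    }
}
```

Fill the two input arrays with `pose.getTranslation(array, 0)` for the camera pose and the hit pose, then create the anchor at the returned midpoint.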

Calculate object position irrespective of camera orientation in augmented reality

I am working on an iOS game and trying to implement a feature like Pokémon Go, where an object stays at a specific position and the user tries to find it through the camera view.
So, I read some tutorials and got some help from these articles:
Augmented Reality Tutorial for iOS from Ray Wenderlich Blog
Augmented Reality iOS Tutorial: Location Based from Ray Wenderlich Blog
Following these tutorials, I successfully managed to find the object from the camera view, but only in one device orientation: Landscape Left and Landscape Right work, but when I rotate the device from landscape to portrait, the object runs away and can't be seen in the camera when I point at the same position.
My problem: how can I calculate the position of an object irrespective of camera orientation, e.g. landscape versus portrait? What is the mathematical calculation for handling the different orientations?
The problem is pretty easy from a mathematical point of view. To achieve this you need to know where the object is in the real world; for example, you need the GPS coordinates of the virtual object. Based on that, you compute the azimuth (bearing) of that location.
The next step is to compute the azimuth the user is looking at through the camera. Calculate it in degrees, so the result is in [0, 360).
When you have both results, you need to check whether the azimuth you're looking at is within your field of view.
For example, if we assume you're looking at azimuth 0 and your field of view is 180 degrees, then you see everything in [270, 360) and [0, 90].
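The wrap-around at 0/360 is the only fiddly part of that check. A small helper handles it by reducing the angular difference to [0, 180] first (a sketch; the `AzimuthCheck` class name is invented, and the math ports directly to Swift):

```java
// Sketch: is a target azimuth inside the camera's horizontal field of view?
public class AzimuthCheck {
    /** True if `target` is within `fov` degrees (total width) of `looking`,
        handling the wrap at 0/360. All angles in degrees. */
    public static boolean inFieldOfView(double target, double looking, double fov) {
        // Shortest angular distance between the two azimuths, in [0, 180].
        double diff = Math.abs((target - looking + 540.0) % 360.0 - 180.0);
        return diff <= fov / 2.0;
    }
}
```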

Rotate GMSMapView based on compass

I'm working on an app where manual interaction with the map is not allowed. The only way the map changes is as the user moves in some direction. So if the user rotates, the phone rotates too, and based on the iPhone's compass the map should rotate automatically, rather than the user rotating it with a two-finger gesture. Here is a picture that makes the idea a little clearer:
Consider the red dot as my location; as I rotate my phone I want the Google map to rotate with it. So my question is: how can this be achieved? After searching I found the method [_myMap animateToViewingAngle:45]; but it didn't do what I was looking for.
EDIT: I thought of this: is there any way we can convert lat and long values to an angle?
You can use the course property from the CLLocation and pass it to the animateToBearing method of your GMSMapView. From the documentation:
Swift
var course: CLLocationDirection { get }
Objective-C
@property(readonly, nonatomic) CLLocationDirection course;
Discussion
Course values are measured in degrees starting at due north and continuing clockwise around the compass. Thus, north is 0 degrees, east is 90 degrees, south is 180 degrees, and so on. Course values may not be available on all devices. A negative value indicates that the direction is invalid.
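Because course is negative when invalid and wraps at 0/360, it is worth filtering the value before feeding it to animateToBearing, so the map never spins the long way round on an update like 350° → 10°. Here is a sketch of the angle math (class and method names are invented; in the app this would be Swift):

```java
// Sketch: validate and low-pass-filter a course value before animating
// the map bearing. Names are hypothetical, not part of any SDK.
public class BearingFilter {
    /** Shortest signed angular difference from `from` to `to`, in (-180, 180]. */
    public static double angleDelta(double from, double to) {
        return (to - from + 540.0) % 360.0 - 180.0;
    }

    /** Filtered bearing update; keeps the previous bearing if the new
        course is invalid (negative, per CLLocation semantics). */
    public static double smooth(double previous, double course, double alpha) {
        if (course < 0) return previous;  // invalid reading, ignore it
        double next = previous + alpha * angleDelta(previous, course);
        return (next + 360.0) % 360.0;    // normalize back to [0, 360)
    }
}
```

With alpha around 0.2–0.5 the map heading follows the course smoothly, and a 350° → 10° update correctly rotates 20° clockwise instead of 340° the other way.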

Is There A Way To Get Street View Coordinates After Gesture?

I'm displaying a Street View (GMSPanoramaView) via the Google Maps SDK for iOS in an iPhone/iPad app, and I need to get the final position of the Street View after the user has navigated around in it using gestures, so that I can restore it to the exact position the user moved it to. This is extremely important to be able to do, since Street View is not always accurate and often places an address hundreds of yards away from the one actually requested, forcing the user to tap and zoom to move the Street View in front of it. I don't see any delegate methods or APIs to get updated coordinates. I can easily track the heading, pitch, zoom, and FOV via the GMSPanoramaViewDelegate didMoveCamera method, but that does not give me updated coordinates. Thus when I restore the Street View using the last heading, pitch, zoom, and FOV values, it displays at the original location with those values applied, which is not the position the user expects. Does anyone know how to get (or track) these coordinates? Thanks!
Implement panoramaView:(GMSPanoramaView *)view didMoveToPanorama:(GMSPanorama *)panorama on the delegate.
On the GMSPanorama there's a CLLocationCoordinate2D property called coordinate; voila.
EDIT
It also appears that at any point in time you can just get the panorama property from the GMSPanoramaView and read the coordinate from there.
