iOS Core Location heading and course changing

I am working on an app that shows the location and direction a plane is heading while flying. I also want to show labels for the cities one is flying past, using augmented reality. I have everything set up and it was working fine when sitting still or driving in a car, but when I used it on a plane something strange happened.
When sitting at the gate, with the door open or closed, the heading of the location icon shows the correct direction when the device is in landscape mode (home button to the right). If I rotate the device to the right, the plane icon rotates to the right by the appropriate amount; same with left. This matters because when I rotate the phone to the right or left and open the camera for augmented reality, the correct cities show up in the correct place. This works completely fine even while we are taxiing on the runway.
However, when we take off the behavior changes: no matter which way I rotate the device, the plane icon always points in the direction the plane is moving.
I am trying to figure out why this happens, and I wonder whether it is because at slower speeds (sitting still or taxiing) Core Location uses HEADING, whereas once we take off enough information is being gathered to use COURSE information instead.
I don't think this is happening because I am in a Faraday cage, because then it wouldn't work at the gate or while taxiing either.
If it is in fact switching between heading and course information, how do I compensate so the city labels stay where they are supposed to be instead of constantly moving to the front of the plane?
I am getting the user's latitude, longitude, and altitude, and the correct heading to the cities, so all of that is working fine. It is just a heading/course problem.
Here is some code just to show that I have these things set up.
locationManager = [[CLLocationManager alloc] init];
locationManager.delegate = self;
locationManager.desiredAccuracy = kCLLocationAccuracyKilometer;
locationManager.distanceFilter = kCLDistanceFilterNone;
[locationManager startUpdatingLocation];
[locationManager startUpdatingHeading];
Can I use the gyroscope to tell which direction the phone is pointing and add that to the heading/course to place the labels correctly when the speed is above a certain amount?
Has anyone run into this problem and solved it?

This is all well documented by Apple. The location manager uses your direction of motion to determine heading, and ignores the phone's attitude, when you are moving quickly. If you want the phone's attitude independently you must use Core Motion instead.
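As a rough sketch of that idea in Swift (the class name, the reference frame, and the yaw-to-degrees conversion are my assumptions rather than anything Core Location documents), you could read the device's attitude from Core Motion yourself and keep it separate from the course that Core Location reports while the plane is moving fast:
import CoreLocation
import CoreMotion

final class HeadingSource: NSObject, CLLocationManagerDelegate {
    private let locationManager = CLLocationManager()
    private let motionManager = CMMotionManager()
    private(set) var course: CLLocationDirection = -1 // direction of travel, degrees
    private var yaw: Double = 0                        // device rotation about vertical, radians

    func start() {
        locationManager.delegate = self
        locationManager.desiredAccuracy = kCLLocationAccuracyKilometer
        locationManager.startUpdatingLocation()

        // xMagneticNorthZVertical references yaw to magnetic north, so it stays
        // meaningful regardless of how fast the aircraft is moving.
        motionManager.deviceMotionUpdateInterval = 1.0 / 30.0
        motionManager.startDeviceMotionUpdates(using: .xMagneticNorthZVertical, to: .main) { [weak self] motion, _ in
            guard let motion = motion else { return }
            self?.yaw = motion.attitude.yaw
        }
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        course = locations.last?.course ?? -1 // -1 means the course is unknown
    }

    // Direction the device itself is pointing, independent of the direction of travel.
    // The exact offset depends on which edge of the device you treat as "forward",
    // so treat this conversion as an approximation.
    var devicePointingDegrees: Double {
        let degrees = -yaw * 180 / .pi // yaw is counterclockwise-positive, in radians
        return (degrees + 360).truncatingRemainder(dividingBy: 360)
    }
}
You could then point the city labels using devicePointingDegrees plus the bearing to each city, and reserve course for drawing the aircraft's direction of travel.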

Related

How to temporarily freeze a node in front of the camera using ARKit, SceneKit in Swift

I built a complete structure as a node (with its child nodes) and the user will walk through it using ARKit.
At some point, if the user cannot continue because of some real obstacle in the real world, I added a "pause" button which should freeze whatever the user currently sees in front of the camera. The user could then move freely to some other open space, and when they release the pause button they will be able to resume where they left off (just someplace else in the real world).
A while ago I asked about it in the Apple Developer forum and an Apple Frameworks Engineer gave the following reply:
For "freezing" the scene, you could transform the anchor's position (in world coordinates) to camera coordinates, and then anchor your content to the camera. This will give you the effect that the scene is "frozen", i.e., does not move relative to the camera.
I'm currently not using an anchor because I don't necessarily need to find a flat surface. Rather, my node is placed at a certain position relative to where we start at (0,0,0).
My question is how do I exactly do what the Apple engineer told me to do?
I have the following code which I'm still stuck with. When I add the node to the camera (pointOfView, last line of the code below), it does freeze in place, but I can't get it to freeze in the same position and orientation as it was before it was frozen.
@IBAction func pauseButtonClicked(_ sender: UIButton) {
let currentPosition = sceneView.pointOfView?.position
let currentEulerAngles = sceneView.pointOfView?.eulerAngles
var internalNodeTraversal = lastNodeRootPosition - currentPosition! // for now, lastNodeRootPosition is (0,0,0)
internalNodeTraversal.y = lastNodeRootPosition.y + 20 // just so it’s positioned a little higher in front of the camera
myNode?.removeFromParentNode() // remove the node from the Real World view. Looks like this line has no effect and just adding the node as a child to the camera (pointOfView) is enough, but it feels more right to do this anyway.
myNode?.position = internalNodeTraversal // the whole node is moved respectively in the opposite direction from the root to where I’m standing to reposition the camera in my current position inside the node
// myNode?.eulerAngles = (currentEulerAngles! * -1) — this code put the whole node in weird positions so I removed it
myNode?.eulerAngles.y = currentEulerAngles!.y * -1 // opposite orientation of the node so the camera will be oriented in the same direction
myNode?.eulerAngles.x = 0.3 // just tilting it up a little bit to have a better view, more similar to the view as before it was locked to the camera
// I don’t think I need to change the eulerAngles.z
myNode!.convertPosition(internalNodeTraversal, to: sceneView.pointOfView) // I'm not sure I wrote this correctly. Also, this line doesn't seem to change anything
sceneView.pointOfView?.addChildNode(myNode!) // attaching the node to the camera so it will remain stuck while the user moves around until the button is released
}
So I first calculate where in the node I'm currently standing and then I change the position of the node in the opposite direction so that the camera will now be in that position. That seems to be correct.
Now I need to change the orientation of the node so that it will point in the right direction and here things get funky. I've been trying so many things for days now.
I use the eulerAngles for the orientation. If I set the whole vector multiplied by -1, it would show weird orientations. I ended up only using the eulerAngles.y which is the left/right orientation and I hardcoded the x orientation (up/down).
Ultimately what I have in the code above is the closest that I was able to get. If I'm pointing straight, the freeze will be correct. If I turn just a little bit, the freeze will be pretty close as well. Almost the same as what the user saw before the freeze. But the more I turn, the more the frozen image is off and more slanted. At some point (say I turn 50 or 60 degrees to the side) the whole node is off the camera and cannot be seen.
Somehow I have a feeling that there must be an easier and more correct way to achieve the above.
The Apple engineer wrote to "transform the anchor's position (in world coordinates) to camera coordinates". For that reason I added the "convertPosition" function in my code, but a) I'm not sure I used it correctly and b) it doesn't seem to change anything in my code if I have that line or not.
What am I doing wrong?
Any help would be very much appreciated.
Thanks!
I found the solution!
Actually, the problem I had was not even described as I didn't think it was relevant. I built the AR nodes 2 meters in front of the origin (-2 for the z-coordinate) while the center of my node was still at the origin. So when I changed the rotation or eulerAngles, it rotated around the origin so my nodes moved in a large curve and in fact also changed their position as a result.
The solution was to use simdPivot. Instead of changing the position and rotation of the node itself, I created a translation matrix and a rotation matrix located at the point of the camera (where the user is standing) and then multiplied the two matrices. Now, when I add the node as a child of the camera (pointOfView), this freezes the image and in effect shows exactly what the user was seeing before it was frozen, because the position is the same and the rotation happens exactly around the user's standing position.
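For reference, here is a minimal sketch of what the Apple engineer's quote describes (convert the node's transform into camera coordinates, then re-parent it to the camera); the simdPivot construction above achieves the same effect by building the translation and rotation matrices by hand. sceneView and myNode are the names from the question; everything else is an assumption.
@IBAction func pauseButtonClicked(_ sender: UIButton) {
    guard let camera = sceneView.pointOfView,
          let node = myNode,
          let parent = node.parent else { return }

    // Express the node's current transform in the camera's coordinate space...
    let transformInCamera = parent.simdConvertTransform(node.simdTransform, to: camera)

    // ...then re-parent the node to the camera with that same local transform.
    // Its position and orientation on screen are unchanged at the moment of the
    // freeze, and from then on it moves with the camera, i.e. it looks frozen.
    node.removeFromParentNode()
    camera.addChildNode(node)
    node.simdTransform = transformInCamera
}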

Calculate object position irrespective of camera orientation in augmented reality

Recently I have been working on an iOS game and am trying to build a feature like Pokémon GO, where an object stays at a specific position and the user finds it through the camera view.
So, I read some tutorial and got some help from these articles:
Augmented Reality Tutorial for iOS from Ray Wenderlich Blog
Augmented Reality iOS Tutorial: Location Based from Ray Wenderlich Blog
From these tutorials I managed to find the object through the camera view, but only in one device orientation, i.e. only Landscape Left and Landscape Right work. When I rotate my device from landscape to portrait, the object runs away and can't be seen in the camera even when I point at the same position.
My problem: how can I calculate the position of an object irrespective of camera orientation, e.g. landscape to portrait and vice versa? What is the mathematical calculation for handling the different orientations?
The thing is pretty easy from a mathematical point of view. To achieve it you need to know where the object is in the real world, for example the GPS coordinates of the virtual object. Based on that you need to get the azimuth (bearing) from the user to that location.
The next step is to work out which azimuth the user is looking at through the camera. Calculate it in degrees so the result falls in [0, 360).
When you have both results you need to check whether the azimuth you are looking at lies within your field of view.
For example, if you are looking at azimuth 0 and your field of view is 180 degrees, then you can see everything in [270, 360) and [0, 90].
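A small Swift sketch of those two checks, assuming you already have the user's and the object's GPS coordinates and the azimuth the camera is pointing at (e.g. from CLHeading); both function names are mine:
import Foundation
import CoreLocation

// Bearing (azimuth) from `from` to `to`, in degrees in [0, 360).
func bearing(from: CLLocationCoordinate2D, to: CLLocationCoordinate2D) -> Double {
    let lat1 = from.latitude * .pi / 180, lon1 = from.longitude * .pi / 180
    let lat2 = to.latitude * .pi / 180, lon2 = to.longitude * .pi / 180
    let dLon = lon2 - lon1
    let y = sin(dLon) * cos(lat2)
    let x = cos(lat1) * sin(lat2) - sin(lat1) * cos(lat2) * cos(dLon)
    let degrees = atan2(y, x) * 180 / .pi
    return (degrees + 360).truncatingRemainder(dividingBy: 360)
}

// True if the object's azimuth falls within `fieldOfView` degrees centered on the camera azimuth.
func isInFieldOfView(objectAzimuth: Double, cameraAzimuth: Double, fieldOfView: Double) -> Bool {
    var delta = abs(objectAzimuth - cameraAzimuth).truncatingRemainder(dividingBy: 360)
    if delta > 180 { delta = 360 - delta } // smallest angular difference, in [0, 180]
    return delta <= fieldOfView / 2
}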

North of map to always point to the north pole: programmatically, iOS

The case is to replicate the north pole indicator in a button and perform the rotation. I know this can be done by rotating the map view entirely. Is there any other neat way where the annotations stay normal to the iPad orientation even after rotation?
EDIT
As @AlexWain says,
mapView.userTrackingMode = MKUserTrackingModeFollowWithHeading
is an excellent solution, but it is only possible when the user's location is displayed on the map while rotating it.
I just need to show a region and point the map towards north on a button click; sadly it is not the user's current location, and it is not visible at that time.
Apple has introduced that feature:
Use mapView.userTrackingMode = MKUserTrackingModeFollowWithHeading
This turns the map in the direction you are looking or moving.
In that case north on the map matches the direction of north.
Update:
If the location is outside of the view, there is NO neat way to do it.
You have to rotate the map view, and apply the inverse rotation to all elements which should not be rotated (e.g. annotations, pins, the copyright label).
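A rough Swift sketch of that approach (where the heading comes from, and whether you rotate the view or a container inside it, is up to you):
import MapKit

func rotate(_ mapView: MKMapView, toHeadingDegrees heading: CLLocationDirection) {
    let radians = CGFloat(heading * .pi / 180)

    // Rotate the whole map view so the desired heading points up...
    mapView.transform = CGAffineTransform(rotationAngle: -radians)

    // ...and counter-rotate every annotation view so it stays upright on screen.
    for annotation in mapView.annotations {
        mapView.view(for: annotation)?.transform = CGAffineTransform(rotationAngle: radians)
    }
}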

userLocation so different to CLLocation

I am using CoreLocation to plot my lat/long. This is displayed as Homer in the attached picture (http://i.imgur.com/IRRwOS0.png). I have kCLLocationAccuracyBest set:
self.locMgr.distanceFilter = kCLDistanceFilterNone;
self.locMgr.desiredAccuracy = kCLLocationAccuracyBest;
When I enable userLocation on the map (blue dot) there appears to be a pretty big difference between the accuracies. The blue dot is a lot more accurate to my actual location.
How can I improve Core Location's accuracy to position my MKAnnotations closer to my actual location?
Why am I getting this inaccuracy? Is userLocation Wi-Fi based while Homer is GPS?
Implement mapView:didUpdateUserLocation and, within that, set your annotation location to equal the mapView's userLocation.
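A minimal sketch of that delegate method in Swift, assuming myAnnotation is an MKPointAnnotation you already added for "Homer":
func mapView(_ mapView: MKMapView, didUpdate userLocation: MKUserLocation) {
    // Snap the custom annotation to the same coordinate the blue dot is using.
    myAnnotation.coordinate = userLocation.coordinate
}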
I'm not entirely sure, but I assume it's probably due to the map view using a private framework for location.
China offsets the GPS for annotations other than the user's current location. More information can be found here:
http://www.sinosplice.com/life/archives/2013/07/16/a-more-complete-ios-solution-to-the-china-gps-offset-problem

Is There A Way To Get Street View Coordinates After Gesture?

I'm displaying a Street View (GMSPanoramaView) via the Google Maps SDK for iOS in an iPhone/iPad app, and I need to get the final position of a Street View after the user has navigated around in it using gestures, so that I can restore it to the exact position the user moved it to. This is extremely important because the Street View is not accurate and often places an address hundreds of yards away from the actual one requested, forcing the user to tap and zoom to move the Street View in front of it. I don't see any delegate methods or APIs to get updated coordinates. I can easily track the heading, pitch, zoom, and FOV via the GMSPanoramaViewDelegate didMoveCamera method, but that does not give me updated coordinates. Thus when I restore the Street View using the last heading, pitch, zoom, and FOV values, it displays at the original location with those values applied, which is not the position the user expects. Does anyone know how to get (or track) these coordinates? Thanks!
Implement panoramaView:(GMSPanoramaView *)view didMoveToPanorama:(GMSPanorama *)panorama on the delegate.
On the GMSPanorama there is a CLLocationCoordinate2D called coordinate - voila.
EDIT
It also appears that at any point in time you can just get the panorama property from the GMSPanoramaView and read the coordinate from there.
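In Swift that might look roughly like this (lastPanoramaCoordinate is a hypothetical property for persisting the position; check the exact Swift spelling of the delegate method against your SDK version):
// GMSPanoramaViewDelegate, Google Maps SDK for iOS
func panoramaView(_ view: GMSPanoramaView, didMoveTo panorama: GMSPanorama?) {
    guard let panorama = panorama else { return }
    // Save the coordinate together with the camera's heading, pitch, zoom, and FOV
    // so the Street View can be restored to exactly this position later.
    lastPanoramaCoordinate = panorama.coordinate
}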
