I'm developing an app using Google ARCore.
I want to create an experience where you see AR objects near you. For that, I need to place pre-created objects according to the user's location.
Google provides an API for that in the ARCore Geospatial API:
garSession.createAnchorOnTerrain(
    coordinate: coordinate,            // latitude/longitude in degrees
    altitudeAboveTerrain: 0,
    eastUpSouthQAnchor: eastUpSouthQTarget
)
My question is how to get the eastUpSouthQTarget parameter.
The example app obtains this parameter from the camera's geospatial transform in the current frame:
// Update the quaternion from landscape orientation to portrait orientation.
let rotationZquat = simd_quaternion(Float.pi / 2, simd_float3(0, 0, 1))
guard let geospatialTransform = self.garFrame.earth?.cameraGeospatialTransform else { return }
let eastUpSouthQPortraitCamera = simd_mul(geospatialTransform.eastUpSouthQTarget, rotationZquat)
But I need to know it beforehand, so I can place objects from stored data.
I can look up longitude and latitude on Google Maps.
But how can I get the orientation, and how do I transform it into this format?
The documentation explains the notation, but it doesn't help in understanding how to produce such a quaternion outside the app.
It also shows that the previous method, getHeading(), which gave you a more traditional compass value, is now deprecated. And the deprecation notice doesn't give clear instructions on how to use the new representation.
I can find pieces connected to this transformation, for example MATLAB's eul2quat function, but I can't see the whole picture. And eul2quat doesn't seem to use location, which, as far as I can tell, is important.
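One helpful detail: the East-Up-South frame is defined locally at each anchor, so (as far as I understand it) you don't need the geographic location to build the quaternion itself, only the compass heading you want the object to face. A hedged sketch in Swift; the function name and the sign convention are my own assumptions, so verify against a live cameraGeospatialTransform before relying on it:

```swift
import simd

/// Illustrative sketch (not an official ARCore helper): build an
/// East-Up-South quaternion from a compass heading in degrees,
/// where 0 = north and angles increase clockwise (90 = east).
/// Assumes a model whose "forward" is its local -Z axis, so the
/// identity quaternion faces north.
func eusQuaternion(fromHeadingDegrees heading: Float) -> simd_quatf {
    // Compass headings increase clockwise, but a positive rotation
    // about the Up (+Y) axis is counterclockwise seen from above,
    // so negate the angle.
    let radians = -heading * .pi / 180
    return simd_quaternion(radians, simd_float3(0, 1, 0))
}
```

You could then store a heading alongside latitude/longitude in your data and compute the quaternion at anchor-creation time, e.g. `eusQuaternion(fromHeadingDegrees: 90)` for an object facing east.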
Related
I'm having huge issues placing a 3D model at a specific GPS position. Before opening the AR view controller I download the model, which means I also have the file path to it. I've tried different things to place the downloaded model at a fixed GPS location in AR.
I'm using Project Dent to achieve this, but I'm not sure whether it is even possible with that library.
I've tried loading the model (a usdz file) into a SCNNode and attaching it to a LocationNode, which I then add to the sceneLocationView.
// Load the model (ModelIO + SceneKit)
guard let modelURL = self.arConfig?.models.first else { return }
let asset = MDLAsset(url: modelURL)
asset.loadTextures()
let modelNode = SCNNode(mdlObject: asset.object(at: 0))
modelNode.scale = SCNVector3(x: 30, y: 30, z: 30)
// Create a location node and attach the model node to it
let locationNode = LocationNode(location: location)
locationNode.addChildNode(modelNode)
locationNode.continuallyUpdatePositionAndScale = true
locationNode.continuallyAdjustNodePositionWhenWithinRange = true
locationNode.scaleRelativeToDistance = true
locationNode.ignoreAltitude = true
// Add the location node (carrying the model) to the scene at the given location
sceneLocationView.addLocationNodeWithConfirmedLocation(locationNode: locationNode)
The result is that I can see the model in front of my camera, but it isn't fixed to the specified location: it keeps moving with the camera when I move it up, to the side, or step back a little.
I also looked into this answer. I used the code snippet for the "ThreeDNode" class and tried initializing a SCNScene from the downloaded model, which I then pass to "ThreeDNode". I've tried initializing the scene with SCNScene(url, options) and also converting the usdz file to a .scn file. However, the scene's root node doesn't contain any geometry, so the "ThreeDNode" class cannot set it as its own.
I also searched for other frameworks that could help me achieve this, but couldn't find any.
I used ProjectDent a few years ago and found that, in general, GPS coordinates are not precise enough to place a model the way I expected. Moreover, the phone's compass is not accurate enough to place your model where you expect with any reasonable accuracy, and the error grows the further away you place the model.
The only way to make it really stable and accurate is to perform localization within a mapped environment, which is possible in only a few cities, using ARGeoAnchor.
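For reference, a minimal ARKit sketch of the ARGeoAnchor approach. This is only an outline: `sceneView` is an assumed ARSCNView already running an ARGeoTrackingConfiguration session, and the coordinate is a placeholder:

```swift
import ARKit

// Check whether geo tracking is supported on this device and available
// at the user's current location (only certain mapped cities qualify).
ARGeoTrackingConfiguration.checkAvailability { available, error in
    guard available else {
        print("Geo tracking unavailable here: \(String(describing: error))")
        return
    }
    // Placeholder coordinate; substitute your model's GPS position.
    let coordinate = CLLocationCoordinate2D(latitude: 37.7956, longitude: -122.3934)
    let geoAnchor = ARGeoAnchor(coordinate: coordinate)
    sceneView.session.add(anchor: geoAnchor)
    // Attach your model's SCNNode in the renderer(_:didAdd:for:) callback
    // for this anchor.
}
```

The availability check is the crucial step: outside the mapped regions the configuration simply won't localize, which matches the limitation described above.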
So, my idea for turning off the geolocation functionality in an OpenLayers 3.9.0 map is to have a toggle button that, when clicked, stops the tracking and removes the feature from the geolocation layer:
geolocation.setTracking('false');
featuresOverlay.getSource().clear();
and then, to turn it on again, it re-enables tracking, adds a feature to the geolocation layer, sets its coordinates, and re-centers the map:
geolocation.setTracking('true');
featuresOverlay.getSource().addFeature(positionFeature);
var coordinates = geolocation.getPosition();
positionFeature.setGeometry(coordinates ? new ol.geom.Point(coordinates) : null);
view.setCenter(coordinates);
Well, this technically does not count as turning the geolocation on/off, because it only removes the visual elements; it does not actually turn the API on or off. Is there such a possibility, or is the above enough?
Sure it does, after one minor change.
Assuming that geolocation in your code references an instance of ol.Geolocation, calling geolocation.setTracking(false) will make it call clearWatch on the browser's geolocation API, so the watch really is stopped. The relevant code is here.
However, setTracking expects a boolean value. You are passing the string 'false', which is interpreted as truthy (it's a non-empty string). Remove the quotation marks from the setTracking parameters, and it should work as expected.
I am working on building a game where a user can build their own map to explore. Using GameKit and Game Center, is it possible to challenge another user to play the map that was just created?
If so, how does this work so that the other user can see the graphics, data, etc. that were created within the other user's game instance?
It depends entirely on how your game is designed. Game Center doesn't really care what sort of data you send in a match, as long as it adheres to the limits Game Center places on message sizes.
The common factor is that you need to find a way to serialize your custom level into a format that can be sent over Game Center, then write deserialization methods to get the data into the map's format. If your maps are persistent, you can probably just send the file (unless you are using a very inefficient representation) and then use your regular methods for making a map out of the file.
For simplicity, let's say you're making a turn-based game with a Minecraft-like board, so the only thing the player can edit is the height of each block. You might send a special turn with the JSON-serialized equivalent of:
NSArray *board = ...; // array of arrays of NSNumbers: the height of each block
NSArray *turn = @[@"This is the turn that sends the board", board];
// Serialize this into NSData with NSJSONSerialization, then send it with
// endTurnWithNextParticipants:turnTimeout:matchData:completionHandler:
Then, in your receivedTurnEventForMatch: handler, test for the special string at index zero of that turn (or simply expect it to be the first turn) and use it to create the board. If it is the other player's turn, programmatically end your current turn with no action; otherwise, let the player receiving the custom map make the first move.
For more ambitious custom content, like images, you would have to check the maximum turn size you can currently send and then break the images into chunks.
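The receiving side could look roughly like this in Swift. The GameKit listener method is real; the board format is the toy example above, and `rebuildMap` is a hypothetical method of your own:

```swift
import GameKit

// GKTurnBasedEventListener callback: decode a custom-map turn.
func player(_ player: GKPlayer, receivedTurnEventFor match: GKTurnBasedMatch,
            didBecomeActive: Bool) {
    guard let data = match.matchData,
          let turn = try? JSONSerialization.jsonObject(with: data) as? [Any],
          turn.first as? String == "This is the turn that sends the board",
          let board = turn.last as? [[Int]] else { return }
    // Rebuild the custom map from the decoded block heights
    // (rebuildMap is your own deserialization method).
    rebuildMap(heights: board)
}
```

The guard chain doubles as validation: anything that isn't the special board turn falls through to your normal turn handling.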
Every 5 seconds the server sends me updated coordinates, which I add to the map as pins. But the existing pins for an object are not removed; its new coordinates are just added as additional pins. How can I update the coordinates of an existing object and remove objects that no longer have coordinates?
I use the native map. A simple example would be appreciated.
The server response contains coordinates for each user who is present; if a user is gone, nothing comes for them. I need an approach that does not remove all the pins, but manages each one individually (update or delete), depending on whether new coordinates arrive for that member.
Use this:
[mapView removeAnnotations:mapView.annotations];
before adding your annotations.
Hope it works :)
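That clears everything, though, while the question asks for per-object management. A sketch of a diff-based update in Swift; `UserAnnotation` is a hypothetical MKAnnotation class keyed by a user id, not a MapKit type:

```swift
import MapKit

final class UserAnnotation: NSObject, MKAnnotation {
    let userID: String
    dynamic var coordinate: CLLocationCoordinate2D
    init(userID: String, coordinate: CLLocationCoordinate2D) {
        self.userID = userID
        self.coordinate = coordinate
    }
}

/// Apply one server update (user id -> latest coordinate) to the map.
func apply(update: [String: CLLocationCoordinate2D], to mapView: MKMapView) {
    // Index the pins currently on the map by user id.
    var existing = [String: UserAnnotation]()
    for case let annotation as UserAnnotation in mapView.annotations {
        existing[annotation.userID] = annotation
    }
    // Move or add pins for users present in this update.
    for (id, coordinate) in update {
        if let annotation = existing.removeValue(forKey: id) {
            annotation.coordinate = coordinate   // moves the pin in place
        } else {
            mapView.addAnnotation(UserAnnotation(userID: id, coordinate: coordinate))
        }
    }
    // Whatever remains received no coordinates this cycle: remove it.
    mapView.removeAnnotations(Array(existing.values))
}
```

Because `coordinate` is `dynamic`, updating it moves the existing pin instead of stacking a new one, and only the users missing from the update are removed.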
I am starting to use the MapBox iOS SDK.
Is there any possible way to query the MapView by a coordinate and get back the terrain-type (water, land) as a result?
I've been reading the API doc for quite a while now, but could not figure it out.
I know there are (interim) solutions that use a Google web service, but I need this to work offline.
I am not bound to MapBox (though I like it). Thank you for any hint!
No need to delve into runtime styling (see my other answer; false lead): there is a very simple method, mapView.visibleFeatures(at: CGPoint, styleLayerIdentifiers: Set<String>). The equivalent in the JavaScript API is queryRenderedFeatures.
func mapView(_ mapView: MGLMapView, regionDidChangeAnimated animated: Bool) {
    let features = mapView.visibleFeatures(at: mapView.center, styleLayerIdentifiers: ["water"])
    print(features)
}
Example output when moving around:
[]
[]
[]
[<MGLMultiPolygonFeature: 0x170284650>]
If the result is empty there is no water at that point; if it contains a polygon, there is.
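To query an arbitrary coordinate rather than the map center, you can first convert it to a view point. A sketch; the "water" layer identifier assumes a style that contains such a layer, and the query only sees features currently rendered on screen:

```swift
import Mapbox

/// Returns true if a rendered water feature covers the given coordinate.
func terrainIsWater(at coordinate: CLLocationCoordinate2D,
                    in mapView: MGLMapView) -> Bool {
    // Convert the geographic coordinate to a point in the map view's
    // own coordinate space, then query the rendered features there.
    let point = mapView.convert(coordinate, toPointTo: mapView)
    let features = mapView.visibleFeatures(at: point,
                                           styleLayerIdentifiers: ["water"])
    return !features.isEmpty
}
```

Since this reads the locally rendered style rather than a web service, it should satisfy the offline requirement, with the caveat that off-screen coordinates return nothing.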
It seems to be possible:
Found at https://www.mapbox.com/ios-sdk/api/3.5.0/runtime-styling.html
There is apparently a way to adapt the UI according to the user's position (park, city, water, etc.). Unfortunately, I don't know how yet (I will update as soon as I find out).
Map interactivity: You can customize the map to the point of having it respond dynamically based on the actions your users are taking. Increase the text size of streets while a user is driving, emphasize points of interest tailored to a user's preferences, or change your UI if users are at parks, trails, landmarks, or rivers.
I made a sample that might help you a bit:
https://github.com/P0nj4/Coordinates-on-water-or-land
Given a coordinate, the app checks with Google whether it's on land or water.