ARKit – How to generate a worldMap for big environment? - ios

I'm developing a 4-player game using ARKit. I know how to save and then retrieve a worldMap; that part isn't difficult.
sceneView.session.getCurrentWorldMap { worldMap, error in
    guard let map = worldMap else {
        self.showAlert(title: "Can't get current world map", message: error!.localizedDescription)
        return
    }
    guard let snapshotAnchor = SnapshotAnchor(capturing: self.sceneView) else {
        fatalError("Can't take snapshot")
    }
    map.anchors.append(snapshotAnchor)
    do {
        let data = try NSKeyedArchiver.archivedData(withRootObject: map, requiringSecureCoding: true)
        try data.write(to: self.mapSaveURL, options: [.atomic])
        DispatchQueue.main.async {
            self.loadExperienceButton.isHidden = false
            self.loadExperienceButton.isEnabled = true
        }
    } catch {
        fatalError("Can't save map: \(error.localizedDescription)")
    }
}
But I don't know how to scan such a large space (about 50 × 50 meters) with an iPhone in order to generate this worldMap.
Could you give me an idea of how to map this space?

If you want to move effectively within a real environment with AR objects around you, you should use the whole developer's arsenal for precise positioning: the Core Location framework (it provides services for determining a device's geographic location, altitude, orientation, or position relative to a nearby iBeacon), the iBeacon framework with certified hardware for it (iBeacon's location-awareness capabilities are especially useful for indoor navigation), and the Vision and Core ML frameworks (designed for using a trained machine-learning model to classify input data such as signs and images).
Before using the aforementioned frameworks, you should scan the whole environment with the iPhone multiple times, each pass adding new feature points to the existing point cloud. The denser that point cloud is, the more reliably ARKit can relocalize against the saved worldMap later.
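For completeness, here is a minimal sketch of the loading/relocalization side, assuming the same mapSaveURL and sceneView used in the saving code above: unarchive the saved ARWorldMap and hand it to a new session via initialWorldMap, so ARKit can try to relocalize against the previously scanned feature points.
func loadSavedWorldMap() {
    // Read the archived map written by the saving code above (mapSaveURL is assumed).
    guard let data = try? Data(contentsOf: mapSaveURL),
          let worldMap = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: data)
    else {
        print("No saved world map found")
        return
    }
    // Run a new session seeded with the saved map; ARKit relocalizes once it
    // recognizes enough of the previously captured feature points.
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = worldMap
    configuration.planeDetection = [.horizontal, .vertical]
    sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}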
P.S.
In addition to the above, ARKit 4.0 offers developers tools such as ARGeoTrackingConfiguration with ARGeoAnchors, and the Scene Reconstruction feature (which works when your device is equipped with a LiDAR scanner).
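As a rough sketch (not from the original answer), checking whether geo tracking is available at the user's current location before running it might look like this; the coordinate used for the ARGeoAnchor is purely an illustrative value.
ARGeoTrackingConfiguration.checkAvailability { available, error in
    guard available else {
        print("Geo tracking isn't available here: \(error?.localizedDescription ?? "unknown reason")")
        return
    }
    let configuration = ARGeoTrackingConfiguration()
    sceneView.session.run(configuration)

    // Anchor content at an illustrative geographic coordinate
    // (Apple Park is used here only as an example value).
    let coordinate = CLLocationCoordinate2D(latitude: 37.3349, longitude: -122.0090)
    let geoAnchor = ARGeoAnchor(coordinate: coordinate)
    sceneView.session.add(anchor: geoAnchor)
}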

Related

The provided configuration is not supported on this device

I have an iPhone 7 and am working on 3D face filters (like TikTok's), but whenever I run the app from Xcode it shows the error "The provided configuration is not supported on this device" and only displays a black screen.
You can't use all ARKit features on an iPhone 7; some features require at least an A12 processor. For example, on an iPhone 7 you can't use People Occlusion, Body Tracking, simultaneous three-face detection, or Scene Reconstruction.
And remember: you always have to check, with an if or guard statement, whether a feature is supported on the current device:
guard let config = arView.session.configuration as? ARWorldTrackingConfiguration else {
    print("You can't run this config on this device.")
    return
}
guard ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) else {
    print("People Occlusion isn't supported here.")
    return
}
config.frameSemantics.insert(.personSegmentationWithDepth)
arView.session.run(config)
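Since the question is specifically about face filters, it's also worth checking face-tracking support up front. A minimal sketch (assuming the same arView as above):
// ARFaceTrackingConfiguration is not supported on an iPhone 7, which is why the
// session reports the provided configuration as unsupported.
guard ARFaceTrackingConfiguration.isSupported else {
    print("Face tracking isn't supported on this device.")
    return
}
let faceConfig = ARFaceTrackingConfiguration()
arView.session.run(faceConfig, options: [.resetTracking, .removeExistingAnchors])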

Inaccurate face detection using ML Kit Face detection, doesn't work with selfies

I am creating an iOS app that uses the Firebase ML Kit Face Detection API, and I am trying to let users take a photo with their camera and check whether there is a face in it. I followed the documentation and some YouTube videos, but it just doesn't seem to work properly or accurately for me. I did some testing using a photo library, not just pictures that I take, and what I found is that it works well with selfies from Google, but when I take my own selfies it never seems to work. I noticed that when I take a selfie with my camera it does a "mirror" kind of thing where it flips the image, but I even took a picture of my friend using the front-facing camera and it still didn't work. So I am not sure whether I implemented this wrong or what else is going on. I have attached the relevant code to show how it was implemented. Thanks to anyone who takes the time to help out; I am a novice at iOS development, so hopefully this isn't a waste of your time.
func photoVerification() {
    let options = VisionFaceDetectorOptions()
    let vision = Vision.vision()
    let faceDetector = vision.faceDetector(options: options)
    let image = VisionImage(image: image_one.image!)
    faceDetector.process(image) { (faces, error) in
        guard error == nil, let faces = faces, !faces.isEmpty else {
            // No face detected, mark the image accordingly
            print("No face detected!")
            self.markImage(isVerified: false)
            return
        }
        // Face has been detected, offer the Verified tag to the user
        print("Face detected!")
        self.markImage(isVerified: true)
    }
}
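One thing that may be worth ruling out (this is an assumption, not part of the original post): photos straight from the camera often carry a non-.up imageOrientation, which can confuse detectors that read pixel data directly. A small sketch that redraws the photo into an .up-oriented copy before handing it to VisionImage:
// Redraw the image so its pixel data matches an .up orientation.
// `photo` stands in for the UIImage captured from the camera.
func normalizedImage(_ photo: UIImage) -> UIImage {
    guard photo.imageOrientation != .up else { return photo }
    let renderer = UIGraphicsImageRenderer(size: photo.size)
    return renderer.image { _ in
        photo.draw(in: CGRect(origin: .zero, size: photo.size))
    }
}

// Usage: pass the normalized copy to the detector instead of the raw photo.
// let visionImage = VisionImage(image: normalizedImage(image_one.image!))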

Get the city and country name with latitude and longitude (iOS)

I use this function:
func getDataCity(tmpLat: Double, tmpLong: Double) {
    let tmpCLGeocoder = CLGeocoder()
    let tmpDataLoc = CLLocation(latitude: tmpLat, longitude: tmpLong)
    tmpCLGeocoder.reverseGeocodeLocation(tmpDataLoc, completionHandler: { (placemarks, error) in
        guard let tmpPlacemarks = placemarks else {
            return
        }
        let placeMark = tmpPlacemarks[0]
        // Country
        guard let countryLocality = placeMark.country else {
            return
        }
        // City
        guard let cityLocality = placeMark.locality else {
            return
        }
        print(countryLocality)
        print(cityLocality)
    })
}
When I use coordinates from Berlin, Germany, for example:
getDataCity(tmpLat: 52.52000659999999, tmpLong: 13.404953999999975)
the function works fine and prints the city and country. However, when I use coordinates from small cities or islands, for example Djerba (an island in Tunisia):
getDataCity(tmpLat: 33.8075978, tmpLong: 10.845146699999987)
the function doesn't print anything. Is there an explanation from Apple? What can I do about it?
To use the function in your own project, add CLLocationManagerDelegate to your ViewController and don't forget to add the Privacy - Location When In Use Usage Description key to your Info.plist.
Apple's maps do not include a city for that location. Your code does recognize the country, but doesn't recognize a city there (in which case you abort).
If you open Apple Maps, you'll note that no cities are marked on that island, and searching for Houmt Arbah, Tunisia (the closest town) doesn't return a result in Apple Maps (it strangely returns Dahmani, which is a mainland town; I don't know why). It does know about Houmt Souq, but that's quite a ways from the given location.
The long and short of it is that Apple's map database doesn't know a lot about Tunisian geography. If you spend a little time comparing Google Maps and Apple Maps, you'll see that there are several parts of Tunisia that Apple Maps knows very little about. For example, if you look at Douz in satellite mode and then switch to map mode, you'll see that Apple's satellite imagery includes an entire village (Al-Qalah) that isn't mapped. And the street map of Douz itself (a town of 38k people) is, to put it bluntly, pathetic.
While Apple's maps have dramatically improved over the years, and in some areas they're now better than Google's maps, in most places Google tends to have far better information. If you need the best maps in arbitrary places, Google's maps are today the gold standard.
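If you need to show something rather than nothing in sparsely mapped areas, one possible fallback (a sketch of my own, reusing the geocoder and location names from the question) is to drop back to coarser CLPlacemark fields when locality is nil:
tmpCLGeocoder.reverseGeocodeLocation(tmpDataLoc) { placemarks, _ in
    guard let placeMark = placemarks?.first else { return }
    // Fall back from city to district to region when finer-grained data is missing.
    let city = placeMark.locality
        ?? placeMark.subAdministrativeArea
        ?? placeMark.administrativeArea
        ?? "Unknown city"
    let country = placeMark.country ?? "Unknown country"
    print(country, city)
}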

AGSFeature.attributes in ArcGIS Runtime 100 not giving all parameters in its dictionary

I am sending a geometry query to show the selected features on the map and to get the selected features.
Both of these work okay, but when I check the attributes dictionary of a feature it contains only 5 key/value pairs, whereas the same function on Android returns 10 key/value pairs.
I am making the query like this:
let query = AGSQueryParameters()
if let selectionGraphicGeometry = selectionGraphic?.geometry {
    let geometry = AGSGeometryEngine.simplifyGeometry(selectionGraphicGeometry)
    query.geometry = geometry
}
selectableLayer?.selectFeatures(withQuery: query, mode: AGSSelectionMode.add, completion: { (result, error) in
    if let features = result?.featureEnumerator().allObjects {
        for feature in features {
            let keys = feature.attributes.allKeys
        }
    }
})
I don't know what I am doing wrong.
In the Version 100 Runtime, we take a slightly different approach for efficiency's sake.
Features will by default only include the minimal set of fields required for rendering and editing. When you are making a selection, you are working with those features, so that's why you're seeing the smaller set of fields.
If you need all the fields for your selected features, you should actually perform a query on your AGSServiceFeatureTable and select the features based on that.
Something like this:
let table = selectableLayer.featureTable as? AGSServiceFeatureTable
table?.queryFeatures(with: query, queryFeatureFields: .loadAll) { (result, error) in
    guard error == nil else {
        print("Error selecting features: \(error!.localizedDescription)")
        return
    }
    guard let features = result?.featureEnumerator().allObjects else {
        return
    }
    selectableLayer.select(features)
    for feature in features {
        let keys = feature.attributes.allKeys
        print(keys)
    }
}
What's odd is that you say you're seeing a different number of fields returned on Android than on iOS. Are you sure the Android app is displaying the same layer with the same renderer?
One other point: You might be better off using the Esri Community (GeoNet). The ArcGIS Runtime SDK for iOS Forum can be found here. Do post a question there if you are seeing different numbers of fields on iOS and Android with the same layer and renderer.
Hope this helps!
P.S. There are two related things you might want to know.
AGSArcGISFeature instances are now Loadable. So if you have an individual feature and you want to get all of its fields from the source service, you can call load(completion:) on it. Or you could pass an [AGSArcGISFeature] array to the AGSLoadObjects() helper function. However, that function will make a separate network request for each feature, so if your array isn't small, that could lead to a bad user experience.
You can set your AGSServiceFeatureTable.featureRequestMode to .manualCache. You then need to call populateFromServiceWithParameters() to load the precise data that you need locally (and as you pan and zoom the map, you will need to manage this cache manually). See here for more details.
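For the first point, a minimal sketch of my own (assuming selectedFeature is an AGSArcGISFeature obtained from the selection) of loading a single feature to get its full attribute set:
// AGSArcGISFeature conforms to AGSLoadable, so loading it fetches the full
// set of fields from the service for just this one feature.
selectedFeature.load { error in
    if let error = error {
        print("Couldn't load feature: \(error.localizedDescription)")
        return
    }
    print(selectedFeature.attributes.allKeys)
}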

Cannot apply style to GMSMapView

My code is as below:
/* Map */
mapView = GMSMapView()
mapView.delegate = self
mapView.mapType = .normal
do {
    // Set the map style by passing the URL of the local file.
    if let styleURL = Bundle.main.url(forResource: "styles", withExtension: "json") {
        mapView.mapStyle = try GMSMapStyle(contentsOfFileURL: styleURL)
    } else {
        NSLog("Unable to find styles.json")
    }
} catch {
    NSLog("One or more of the map styles failed to load. \(error)")
}
I am following this tutorial on how to customize my Google map.
Above is my code for loading the styles.json file. I added the file to my build bundle, and the code never throws an exception about being unable to parse my JSON file. It simply does not apply the style to my map.
Any help would be appreciated. I am slowly dying inside!!!
Leaving an answer for anyone in the future who goes down my path:
Google Maps styling does not work for maps of South Korea. It even works in North Korea, but not in the South. South Korean law prohibits map data from being exported to foreign data centers.
source:
Yes, Korea does not support some features offered by Google Maps due to national law. Google Maps Korea cannot export map data to data centers abroad, and this includes the ability to dynamically change the map image. Many South Korean map services are limited to domestic use, and Google is striving to make this a better service. For more details, here's the original answer in Korean: original reply from Google Maps Korea
