I would like to develop a Location-Based AR App - augmented-reality

I would like to develop an app like a tourist guide that shows the locations of historical places and cultural heritage sites. Basically, when the user scans a specific historic building with an Android device, the app should display information about it. I also want to mark the places with POIs. According to my research, this calls for features of both location-based and marker-based AR. For this kind of app, which tools and frameworks should I use? Can I use ARCore?
I haven't tried anything yet; I just want to be sure which tools I'm going to use.

Related

Location-based app to match users - do I need precise geolocation?

Imagine an app like Tinder, but not for dating. It connects people in the same city/town/country, not based on exact location, so a single broad location check, e.g. on app start-up, is enough.
If I only need these three levels of location (city/town/country), do I still need the usual APIs like the Geolocation API or Core Location? Or is there a way to simplify this, since no finer granularity is needed?
Or would trying to simplify it only lead to more custom work?

Is it possible to create IndoorMaps like airports from Apple with IMDF?

I'm quite confused about creating an indoor map for our company to display on iPads.
Looking at Apple's new Indoor Maps Program, I thought I had found the perfect solution. Nice examples are shown at some airports in the official Maps app.
But the deeper I dig into this topic, the less I understand.
For the airport indoor maps provided by Apple directly, the maximum zoom level is nearly unrestricted: you can zoom insanely deep into the indoor map (down to roughly 5 m). When I create a map with MapKit, I can't even zoom close enough to a building to fill the screen with it. Is that zoom level even possible for a private developer?
In the documentation about IMDF files, they mention that, as a private developer,
you will need to create IMDF yourselves. The good news is there are a number of third party platforms and tools that can make creating and updating IMDF easier. See section on third party platforms below.
So I studied these third-party platforms and noticed that I can only create IMDF when using it together with their own software and map SDKs. That would make me dependent on another platform and SDK, which isn't in my interest. I didn't find any platform or tool that simply converts floor plans to IMDF. What's the trick? Am I overcomplicating this?
The most confusing part of IMDF is: what is IMDF exactly? Third-party platforms advertise it as the new file format for indoor maps. Apple, on the other hand, says that
IMDF is a data model that is used to describe an indoor space. IMDF is output as a set of GeoJSON files.
So is it a file or a format? Would it be enough to find a tool that creates GeoJSON files?
Maybe somebody has some experience with this topic and can give me some hints, or can even suggest a simpler and better solution for displaying indoor maps of, for example, warehouses.
The Indoor Mapping Data Format is Apple's way of modeling indoor spaces. An IMDF archive is one manifest.json file plus a number of .geojson files, which are tightly related through the id properties in the GeoJSON.
See the IMDF Sandbox link to get a sample IMDF archive from Apple.
If you are building indoor maps for your example warehouse and want to stay independent of other parties, then you need to create your own GeoJSON. You would use Apple's IMDF Sandbox to validate your IMDF archive and/or report any issues with it.
You mentioned a few links above; let me summarize them, plus a couple of others that we used while developing our IMDF solution.
Indoor Mapping Data Format — https://register.apple.com/resources/imdf/
Introducing the Indoor Maps Program - https://developer.apple.com/videos/play/wwdc2019/245
Video 245 from WWDC 2019 describes the IMDF Sandbox — https://register.apple.com/indoor/imdf-sandbox
Because building and understanding IMDF can be a bit complicated, there is the IMDF Sandbox: a tool for visualizing, inspecting, editing, and experimenting with an IMDF archive.
Adding Indoor Maps to your App and Website — https://developer.apple.com/videos/play/wwdc2019/241
Video 241 from WWDC 2019 gives sample MapKit & MapKit JS projects
Displaying an Indoor Map — https://developer.apple.com/documentation/mapkit/displaying_an_indoor_map
Displaying Indoor Maps with MapKit JS — https://developer.apple.com/documentation/mapkitjs/mapkit/displaying_indoor_maps_with_mapkit_js
I apologize that this was downvoted by someone; likely someone deemed it not a programming question. I thought it worthwhile enough to answer.
To answer your questions:
What is IMDF exactly?
See https://register.apple.com/resources/imdf/Reference/#archives
Datasets MUST be delivered as ZIP compressed archives
Archives MUST contain a Manifest object supplied in a dedicated file named manifest.json
Features MUST be packaged as homogenous GeoJSON FeatureCollections
Is it possible to create IndoorMaps like airports from Apple with IMDF?
Yes. See the IMDF Sandbox link above; it includes an example of Victoria International Airport (YYJ).
Is it even possible to get that zoom level as a private developer?
The achievable zoom level (MKMapView.CameraZoomRange) would have to be determined empirically.
If you are familiar with GIS solutions, Esri has its own indoor mapping template with which you can create indoor maps and then export them to IMDF format.
The process should be:
Map your components and paths using Esri's ArcGIS Pro and store them in a geodatabase.
Getting Started with ArcGIS Indoor Maps
Complete your map, then test the paths and openings using the navigation tool to make sure everything works.
Export the existing geodatabase to IMDF format using the Generate Unit Openings tool.
Export Indoor Maps data to IMDF

Google Street and AR

I want to connect Google Street View with an augmented reality application, and I am looking for a development framework with which I can do that. Basically, I have to get some values from a database containing the addresses or lat/long coordinates of restaurants, and then identify those restaurants on Google Street View using augmented reality. How can this be done? Are there frameworks for it? I have looked at String, Vuforia, and Metaio, but I am not sure how Google Street View could be integrated with Vuforia, and it looks like String and Metaio are not selling AR licenses any more.
Vuforia and similar SDKs are basically marker-based AR libraries: they are used to detect images that are known in advance and can be recognized by the SDK.
From your description, you seem to need a geo-location based AR. Although I'm not sure this is exactly what you want, I suggest you take a look at mixare (which is also open source): mixare
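Whichever SDK you choose, geolocation-based AR ultimately reduces to computing, for each POI, the distance and compass bearing from the device's GPS fix, then drawing the POI at that bearing relative to the device heading. A rough sketch of the underlying math (plain Python for illustration; the function name is my own):

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (haversine, in metres) and initial compass
    bearing (degrees clockwise from north) from point 1 to point 2."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    # Haversine formula for the central angle between the two points.
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    distance = 2 * R * math.asin(math.sqrt(a))
    # Initial bearing along the great circle, normalized to [0, 360).
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return distance, bearing
```

An AR overlay like mixare's then only needs to compare each POI's bearing with the compass heading to decide where on screen to draw it.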

AR ODG application for conference calls

I'm researching AR frameworks in order to select the best option for developing conference call/ meeting application for ODG glasses.
I have only a few criteria for selecting a framework:
Performance of video streaming (capturing and encoding) must be watched closely to avoid overheating and excessive power consumption,
Should support extended tracking and
Video capturing should not be frame by frame.
I have no experience with the AR field in general, and I would really appreciate your opinions or some guidance on how to choose the best-fitting framework.
For ODG, you should use Vuforia, according to the software details:
Qualcomm Technologies Inc.'s VuforiaTM SDK for Digital Eyewear
Vuforia supports extended tracking. Given what you are asking, you'll need more than just an AR SDK, and you'll need to identify what you want exactly. Do you want an application that lets the user see who they're talking to, or do you want some holographic content? Depending on the answer, maybe smart glasses aren't what you need, and at that point you should learn more about the different SDKs out there. I suggest you look at this and that.

Techniques for offline reverse geocoding on a mobile device?

I am working on a mobile mapping application (currently iOS, eventually Android), and I am struggling with how best to support reverse geocoding from lat/long to country/state without using an online service.
Apple's reverse geocoding API depends on Google as the backend and works great while connected. I could achieve similar functionality using the OpenStreetMap project, or any number of other web services.
What I really want, however, is a C library that I can call from within my application even when offline, passing in GPS coordinates and having it return the country and/or state at those coordinates. I don't need granularity finer than state level, so the dataset is not huge.
I've seen examples of how to do this on a server, but never anything appropriate for a mobile device.
I've heard SpatiaLite might be a solution, but I am not sure how to get it working on iOS, and I wonder if it may be overkill for the problem.
What are some recommended techniques to accomplish this?
Radven
You will need to get the shapefiles (lat/lng outlines) of all the administrative entities (US states, countries, etc.). There are a lot of public-domain sources for these. For example, NOAA has shapefiles for US states and territories that you can download:
http://www.nws.noaa.gov/geodata/catalog/national/html/us_state.htm
Once you have the shapefiles, you can use a shapefile reader to test whether a lat/lng falls within a shape. There are open-source readers in C; just Google for them. I have seen shapefile code on SourceForge, but have not used it myself.
The Team at OpenGeoCode.Org
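The core test the shapefile reader performs is point-in-polygon. A minimal even-odd ray-casting version (sketched in Python rather than C for brevity, with a toy polygon; edge cases such as points exactly on a boundary are ignored):

```python
def point_in_polygon(lat, lon, polygon):
    """Even-odd ray casting: cast a ray due east from (lat, lon) and count
    crossings with the polygon's edges; an odd count means the point is
    inside. `polygon` is a list of (lat, lon) vertices."""
    inside = False
    n = len(polygon)
    for i in range(n):
        lat1, lon1 = polygon[i]
        lat2, lon2 = polygon[(i + 1) % n]
        # Does this edge straddle the point's latitude?
        if (lat1 > lat) != (lat2 > lat):
            # Longitude at which the edge crosses that latitude.
            cross = lon1 + (lat - lat1) / (lat2 - lat1) * (lon2 - lon1)
            if lon < cross:
                inside = not inside
    return inside
```

With real country outlines you would run this against each candidate shape, typically after a cheap bounding-box pre-check; the same algorithm ports directly to C.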
If you're looking for a quadtree-based approach, try Yggdrasil. It generates a quadtree from country polygon data. A Ruby example script can be found here.
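The idea behind such a quadtree, as I understand it, is that any cell lying wholly inside one country stores that country's code, so a lookup is just a descent from the root. A sketch (the data layout is my own illustration, not Yggdrasil's actual format):

```python
def quadtree_lookup(node, lat, lon, lat_lo=-90.0, lat_hi=90.0,
                    lon_lo=-180.0, lon_hi=180.0):
    """Descend a quadtree until a leaf is reached. Internal nodes are dicts
    with 'nw'/'ne'/'sw'/'se' children; leaves hold a country code, or None
    for mixed cells that need an exact point-in-polygon fallback."""
    while isinstance(node, dict):
        lat_mid = (lat_lo + lat_hi) / 2
        lon_mid = (lon_lo + lon_hi) / 2
        # Pick the quadrant containing the point and shrink the bounds.
        if lat >= lat_mid:
            lat_lo, ns = lat_mid, "n"
        else:
            lat_hi, ns = lat_mid, "s"
        if lon >= lon_mid:
            lon_lo, ew = lon_mid, "e"
        else:
            lon_hi, ew = lon_mid, "w"
        node = node[ns + ew]
    return node
```

The attraction for a mobile device is that most lookups terminate after a handful of dict hops, and the expensive polygon test is only needed for the None leaves near borders.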
I can suggest a well-written third-party offline reverse geocoding library:
https://github.com/Alterplay/APOfflineReverseGeocoding