Import .kml map points to ARCore

Before I go down the rabbit hole of learning a new language, I'd like to know if what I want is possible.
I've just seen the Google maps augmented reality ARCore working and have a perfect project in mind. I have an existing .kml file containing map points all over Paris.
I'd like to import them into an ARCore project/Android app, so that when I turn it on it shows me the nearest map point overlaid on the camera view, with directions from my current location.
My (very limited) research tells me most of this is possible, but I can't find any info about importing map points from .kml files. Would I have to import every point manually?
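For what it's worth, a .kml file is plain XML, so the placemark coordinates can be read programmatically rather than re-entered by hand. A minimal Python sketch, assuming simple <Placemark> points and a hypothetical file name:

```python
import xml.etree.ElementTree as ET

KML_NS = {"kml": "http://www.opengis.net/kml/2.2"}

def load_placemarks(path):
    """Return (name, latitude, longitude) tuples for every point placemark in a .kml file."""
    tree = ET.parse(path)
    points = []
    for placemark in tree.getroot().iter("{http://www.opengis.net/kml/2.2}Placemark"):
        name_el = placemark.find("kml:name", KML_NS)
        coords_el = placemark.find(".//kml:coordinates", KML_NS)
        if coords_el is None:
            continue  # placemark without a point geometry
        lon, lat = coords_el.text.strip().split(",")[:2]  # KML stores "lon,lat[,alt]"
        points.append((name_el.text if name_el is not None else "", float(lat), float(lon)))
    return points

if __name__ == "__main__":
    for name, lat, lon in load_placemarks("paris_points.kml"):  # hypothetical file name
        print(name, lat, lon)
```

The resulting list could then be bundled as an asset or loaded at runtime by the Android app.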

Related

How can you generate real-life terrain in Roblox?

Goal
I am making a flight game on Roblox that requires real-world map data to generate terrain.
Problems
I have absolutely no idea how to write this kind of program, and I have been unable to find any terrain generators that meet my requirements; in fact, I have only found one terrain generator for Roblox at all.
Requests
- The terrain needs to be generated fast enough that commercial planes, which travel at around 500 knots, do not fly past the edge of the generated terrain.
- Accurate airports need to be generated, with taxiways, runways and the airport buildings.
- I also need the taxiway and runway location data, as well as the location of taxiway markings, so that planes can pathfind along taxiways and runways and fly an ILS approach.
- The data used for terrain should be acquired live, so that I don't have to create an enormous map and use up too much storage.
THIS POST NO LONGER NEEDS ANSWERS
I have started working on a program to accomplish this. If finished, the project will be linked here.
It's not going to be high-quality terrain, but you can download map data from openstreetmap.org and create a mesh for the ground, then use the building information to display buildings as basic shapes. Airports should also be easy to extract. I suggest creating one mesh per chunk and then streaming the required chunks to the client, assuming that this works properly in Roblox. I'm not sure how detailed you want the meshes to be, but especially with two or more levels of detail, it should be no problem for the server.
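As a rough sketch of the "download map data from openstreetmap.org" step, the Overpass API can return building footprints for a small bounding box, which an external tool could then convert into chunk meshes (the bounding box below is just an example area):

```python
import json
import urllib.parse
import urllib.request

OVERPASS_URL = "https://overpass-api.de/api/interpreter"

def fetch_building_footprints(south, west, north, east):
    """Fetch building outlines inside a lat/lon bounding box via the Overpass API."""
    query = f"""
    [out:json][timeout:25];
    way["building"]({south},{west},{north},{east});
    out geom;
    """
    body = urllib.parse.urlencode({"data": query}).encode("utf-8")
    with urllib.request.urlopen(urllib.request.Request(OVERPASS_URL, data=body)) as resp:
        data = json.loads(resp.read())
    # Each way carries its outline as a list of lat/lon vertices.
    return [[(node["lat"], node["lon"]) for node in way.get("geometry", [])]
            for way in data.get("elements", [])]

footprints = fetch_building_footprints(48.85, 2.34, 48.86, 2.36)  # example bounding box
print(len(footprints), "building outlines")
```

Roblox itself can't run Python, so a pipeline like this would sit on an external server that prepares the chunks the game then streams.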

Displaying Google Tango scans with Hololens

I am currently working with Google Tango and the Microsoft HoloLens. I had the idea of scanning a room or an object with Google Tango and then converting it and displaying it as a hologram on the HoloLens.
For that I need to get the ADF file onto my computer.
Does anyone know of a way to get ADF files onto a computer?
Do you know if it is possible to convert ADF files into usable 3D files?
An ADF is not a 3D scan of the room; it's a collection of feature descriptors from the computer-vision algorithms, with associated positional data, and the format is not documented.
You will want to use the point cloud from the depth sensor, convert it to a mesh (there are existing apps that do this), and import the mesh into a render engine on the HoloLens.
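For the point-cloud-to-mesh step, a minimal sketch using the Open3D library, assuming the depth data has been exported as a .ply point cloud (file names are placeholders):

```python
import open3d as o3d

# Load an exported point cloud (e.g. saved from the Tango depth sensor).
pcd = o3d.io.read_point_cloud("room_scan.ply")  # hypothetical file name

# Poisson reconstruction needs per-point normals.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

# Reconstruct a triangle mesh from the point cloud.
mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)

# Export in a format that Unity/HoloLens tooling can import.
o3d.io.write_triangle_mesh("room_scan.obj", mesh)
```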

How to save a laser-scan 3D point cloud in the ADF (Google Tango Area Description File) format

The 3D point cloud is generated by my laser scanner. I want to save it in the ADF format so that Google Tango can use it.
The short answer is... you probably can't.
There is no public information on the ADF format, but in any case it uses more than the 3D points from the depth camera. The Google I/O videos show how Tango uses the wide-angle camera to extract image features and recognize the environment. I guess using only 3D data would be too expensive and could not use information from distant points.

Using Augmented Reality libraries for Academic Project

I'm planning to do the final-year project of my degree on augmented reality. It will use markers, and there will also be interaction between virtual objects (a sort of simulation).
Do you recommend using libraries like ARToolKit, NyARToolkit or osgART for such a project, since they come with all the functions for tracking, detection, calibration, etc.? Will there be much work left from the programmer's point of view?
What do you think about using OpenCV and doing the marker detection, recognition, calibration and other steps from scratch? Will that be too hard to handle?
I don't know how familiar you are with image or video processing, but writing a tracker from scratch will be very time-consuming if you want it to return reliable results. The effort also depends on the kind of markers you plan to use. ARToolKit, for example, compares the marker content detected in the video stream against images you defined as markers beforehand: it tries to match images and returns a probability that a certain part of the video stream is a predefined marker. Depending on the threshold you use and the lighting conditions, markers are not always recognized correctly.
Then there are markers such as Data Matrix, QR codes and frame markers (used by QCAR) that encode an id optically, so no image matching is required and all the necessary data can be retrieved from the video stream. There are also more complex approaches like natural feature tracking, where you can use predefined images, provided they offer enough contrast and points of interest to be recognized later by the tracker.
So if you are more interested in the actual application or interaction than in understanding how trackers work, you should base your work on an existing library.
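To make the contrast concrete, here is a minimal sketch of id-based marker detection using OpenCV's ArUco module (one of the "encodes an id optically" family, not ARToolKit or QCAR); it assumes opencv-contrib-python 4.7 or later and the default webcam:

```python
import cv2

# ArUco markers encode an id in their black/white grid, so no image matching is needed.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)  # default camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = detector.detectMarkers(gray)
    if ids is not None:
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("markers", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```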
I suggest you use OpenCV: you will find high-quality algorithms, and it is fast. New methods are continuously being developed, so it will soon be possible to run them in real time on mobile devices.
You can start with this tutorial here.
Mastering OpenCV with Practical Computer Vision Projects
I did the exact same thing and found Chapter 2 of this book immensely helpful. They provide source code for the marker-tracking project, and I've written a frame-marker generator tool. There is still quite a lot to figure out in terms of OpenGL, camera calibration, projection matrices, markers and extending it, but it is a great foundation for the marker-tracking portion.
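For the camera-calibration / projection-matrix part, the usual pattern is to estimate the marker pose with cv2.solvePnP from the four detected corners and the known physical marker size; a minimal sketch, where the intrinsics and marker size are placeholder values (real intrinsics would come from cv2.calibrateCamera):

```python
import numpy as np
import cv2

MARKER_SIZE = 0.05  # marker edge length in metres (assumption)

# Marker corners in the marker's own coordinate frame (z = 0 plane).
object_points = np.array([
    [-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
    [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
], dtype=np.float32)

# Placeholder intrinsics; use values from a real calibration instead.
camera_matrix = np.array([[800.0,   0.0, 320.0],
                          [  0.0, 800.0, 240.0],
                          [  0.0,   0.0,   1.0]], dtype=np.float32)
dist_coeffs = np.zeros(5, dtype=np.float32)

def marker_modelview(image_corners):
    """Turn the 4 detected marker corners (pixel coordinates) into a 4x4 model-view matrix."""
    _ok, rvec, tvec = cv2.solvePnP(object_points, image_corners,
                                   camera_matrix, dist_coeffs)
    rotation, _ = cv2.Rodrigues(rvec)        # rotation vector -> 3x3 matrix
    modelview = np.eye(4, dtype=np.float32)
    modelview[:3, :3] = rotation
    modelview[:3, 3] = tvec.ravel()
    return modelview
```

The resulting matrix (after flipping axes to match OpenGL's convention) is what pins a virtual object to the marker.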

Adding vector map data to an iOS GPS app: real-time vector graphics rendering

We are working on a project to add vector map data from OSM and NAVTEQ to an iOS GPS app.
Currently, the app displays raster map images and provides moving-map navigation features. We now want to take it a step further by integrating vector maps, but we don't know where to start.
Guidance from developers with experience in GPS navigation would be great.
Here is the brief on the requirements:
Target Devices:
iOS. C++ is preferred for the core for future compatibility with other platforms.
Data integration and packaging:
Map data source:
- NAVTEQ
- OpenStreetMap
File format:
- Ideal for mobile devices, taking device limitations into consideration.
- Either find an already-established format or create one in-house.
Compiling:
- Determine a format for source data (SHP, MapInfo, etc.)
- Compile source format to required format.
Map rendering engine:
Display of maps:
- The vector map view will be separate from the current raster map view.
- Render data into lines, points, polygons, etc. in real time. Tiled or pre-rendered maps are not acceptable.
- 2D bird's-eye view (3D is planned for future versions).
- Shaded relief to illustrate elevation.
- Display user-generated data such as routes, track logs and waypoints.
- A scale bar, e.g. 500 metres.
- Speedy performance is essential for a good user experience.
- A good example would be the TomTom iOS app.
Map Interactions:
- Pan, zoom, rotate.
- Make use of multitouch functionality.
Search
- Address, locations, POI (Geo Coding)
- Address from location (Reverse Geo Coding)
Style sheets
- Easily customise the look of the map being displayed.
- Every element can be customised.
We would like to find out where to start our research. What libraries and SDKs are out there that are worth spending time investigating?
Some notes based on my experience:
Source data format: you'll probably want to be able to import data from ESRI shapefiles and OpenStreetMap (which comes as XML or a more compact but equivalent binary format). NAVTEQ data can be obtained as ESRI shapefiles. Shaded relief can be obtained by processing USGS height data (http://dds.cr.usgs.gov/srtm/).
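As an illustration of the shaded-relief step, a common approach is to compute a hillshade from the elevation grid's slope and aspect; a minimal numpy sketch, assuming the DEM has already been read into a 2D array and using conventional sun-position defaults:

```python
import numpy as np

def hillshade(dem, cell_size=30.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Compute a simple hillshade (0..255) from a 2D elevation grid."""
    az = np.radians(azimuth_deg)
    alt = np.radians(altitude_deg)
    dy, dx = np.gradient(dem, cell_size)          # elevation change per metre
    slope = np.arctan(np.hypot(dx, dy))
    aspect = np.arctan2(-dx, dy)
    shaded = (np.sin(alt) * np.cos(slope) +
              np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return (np.clip(shaded, 0.0, 1.0) * 255).astype(np.uint8)

# Example with a tiny synthetic ridge instead of a real SRTM tile.
dem = np.fromfunction(lambda y, x: 100.0 * np.sin(x / 20.0), (100, 100))
print(hillshade(dem))
```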
2D versus 3D: the step from one to the other is a big one. 2D data is almost invariably provided as latitude and longitude and projected to a plane: Google Maps and OpenStreetMap use a very simple but much derided spherical Mercator projection. Moving to 3D requires a decision on the coordinate system - projected plane plus height versus true 3D based on the shape of the earth - and possibly problems involving level of detail. A good way to proceed might be to draw the shape of the earth (hills and valleys) as a triangle mesh, then drape the rest of the map on it as a texture. You might want to consider "two and a half D" - using a perspective transformation to display the map as if viewing it from a height.
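For reference, the spherical Mercator projection mentioned above is only a couple of lines in each direction; a small sketch:

```python
import math

EARTH_RADIUS = 6378137.0  # metres; the sphere radius used by spherical/web Mercator

def to_mercator(lat_deg, lon_deg):
    """Project latitude/longitude (degrees) to spherical-Mercator x/y in metres."""
    x = EARTH_RADIUS * math.radians(lon_deg)
    y = EARTH_RADIUS * math.log(math.tan(math.pi / 4 + math.radians(lat_deg) / 2))
    return x, y

def from_mercator(x, y):
    """Inverse projection back to latitude/longitude in degrees."""
    lon = math.degrees(x / EARTH_RADIUS)
    lat = math.degrees(2 * math.atan(math.exp(y / EARTH_RADIUS)) - math.pi / 2)
    return lat, lon

print(to_mercator(48.8566, 2.3522))  # example: central Paris
```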
Libraries: there's quite a big list of map rendering libraries here, both commercial and non-commercial (disclosure: mine is one of them). Many of these libraries have style sheet systems for customising the map look and feel.
A very good open-source rendering library (not mine) is Mapnik, but I am not sure whether it will port very easily onto iOS. However, it's a very good idea to read up on how Mapnik and other rendering libraries do their work, to get a feel for the problem. The OpenStreetMap wiki is a good portal for learning more about the field.
Text rendering on maps is nearly always done using FreeType, an open-source rasterizer library with an unrestrictive license.
Try out the Mapbox library: http://mapbox.com/
There is a list on the OSM Wiki but it is sadly not complete.
Two vector libraries that I know of are CartoType (which you can see in use in the newer Lonely Planet Guides) and Skobbler. Skobbler don't have an off-the-shelf product, but I believe they will integrate their vector maps and routing for you.
There is also a related question on the OSM StackExchange.
