AR Google Tango Project - augmented-reality

I'd like to know how to create a target for large-scale architectural AR on a real site. In other words, I need Tango to superimpose my 3D model on a specific place.
I have tried the Google Tango Area Learning tutorial (https://developers.google.com/tango/apis/unity/unity-codelab-area-learning), but after showing the message WALK AROUND TO RELOCALIZE the tablet does nothing, even though I walk around to scan the real space. After a few minutes the message "Unity project has stopped" appears on the Tango tablet's screen.
Could an ADF file be used instead of relocalizing the environment?
I've scanned some interior scenes with Tango Explorer and saved them, but I'm not able to use them for environment-recognition purposes.
I'm working with Unity and a Google Tango tablet.
Thank you in advance for your response.

For anyone else facing this problem - the likely cause is not having a recent ADF file already on the device.
You need to first create an Area Description File (ADF) by scanning, and then you can separately localise to that ADF - so you cannot "use an ADF instead of relocalising."
The tutorial you link above needs you to have separately created an ADF for your location - it simply chooses the most recent one you have.
You can use the Area Learning example to create your ADFs, and try localising to them. It also shows superimposing 3D models.
Also, look at the augmented reality one to see how to have objects load already in a specific place.

Related

A-Frame: FOSS Options for widely supported, markerless AR?

A-Frame's immersive-ar functionality will work on some Android devices I've tested with, but I haven't had success with iOS.
It is possible to use an A-Frame scene for markerless AR on iOS using a commercial external library - for example, this demo from Zapworks using their A-Frame SDK: https://zappar-xr.github.io/aframe-example-instant-tracking-3d-model/
The tracking seems to be nowhere near as good as A-Frame's hit-test demo (https://github.com/stspanho/aframe-hit-test), but it does seem to work on virtually any device and browser I've tried, and it is good enough for the intended purpose.
I would be more than happy to fall back to a lower-quality AR mode in order to have AR at all on devices that don't support immersive-ar in the browser. I have not been able to find an A-Frame-compatible solution built only from free/open-source components - only commercial products like Zapworks and 8th Wall.
Is there a free / open source plugin for A-Frame that allows a scene to be rendered with markerless AR across a very broad range of devices, similar to Zapworks?
I ended up rolling my own solution, which wasn't complete but was good enough for the project. Strictly speaking, there are three problems to overcome to get a markerless AR experience on mobile without relying on WebXR:
Webcam display
Orientation
Position
Webcam display is fairly trivial to implement in HTML5 without any libraries.
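As a rough illustration, here is a minimal TypeScript sketch of that part, assuming a full-screen <video id="camera"> element sitting behind the A-Frame scene (the element id is made up for the example):
const video = document.getElementById('camera') as HTMLVideoElement;
navigator.mediaDevices
  .getUserMedia({ video: { facingMode: 'environment' }, audio: false })
  .then((stream) => {
    video.srcObject = stream; // feed the rear camera into the video element
    video.setAttribute('playsinline', ''); // so iOS Safari plays it inline
    return video.play();
  })
  .catch((err) => console.error('Camera unavailable or permission denied:', err));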
Orientation is already handled nicely by A-Frame's "magic window" functionality, including on iOS.
Position was tricky and I wasn't able to solve it. I attempted to use the FULLTILT library's accelerometer functions, and even using readings with gravity filtered out I wasn't able to get a high enough level of accuracy. (As it happened, this particular project did not need it.)
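For anyone tempted to try the same route, here is a sketch of the naive dead-reckoning idea using the plain devicemotion event rather than FULLTILT (variable names are illustrative). It also shows why the approach drifts: any constant sensor bias b integrates to a position error of 0.5 * b * t^2, so the error grows quadratically with time.
let vx = 0; // velocity along one axis, m/s
let px = 0; // position along one axis, m
window.addEventListener('devicemotion', (e: DeviceMotionEvent) => {
  const ax = e.acceleration?.x ?? 0; // m/s^2, gravity already filtered out by the OS
  const dt = (e.interval ?? 16) / 1000; // sensor interval is reported in milliseconds
  vx += ax * dt; // first integration: acceleration -> velocity
  px += vx * dt; // second integration: velocity -> position (drifts quickly)
});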

I want to build an AR tool to place and store text files in a virtual space

It's called a memory palace (read: 'Moonwalking with Einstein'); it's an ancient technique for memorizing things - in my case, coding concepts and Spanish and Indonesian phrases.
I'm learning Python now, but I'm not really sure what direction to move in and what stack should be used to build a project like this. It wouldn't be too complex; I just want to place and save "text files" in a virtual space like my bedroom or on my favorite hikes.
If anyone has insights or suggestions it'd be much appreciated.
Probably the two most common AR frameworks, on mobile devices anyway, at the moment are ARKit for iOS devices and ARCore for Android devices.
I am sure you can find comparisons of the strengths and weaknesses of each one but it is likely your choice will be determined by the type of device you have.
In either case, it sounds like you want to have 'places' you can return to over time and see your stored content. For this you could build on some common techniques:
link the AR object to some sort of image in the real world, and when this image is recognised by the AR app, launch your AR object - in your case, a text file.
use 'Cloud Anchors' - these are essentially anchors for AR objects that can persist over time, when you close the app and come back to it later, and even be shared between users on different devices.
You can find more information on Cloud Anchors at the link below, including information on using them with iOS and Android:
https://developers.google.com/ar/develop/java/cloud-anchors/overview-android

Can ARCore recognise documents as well?

ARCore can detect images and perform some action when an image is recognized.
But if I want to recognize a formal document, can ARCore be useful? So there are two parts to the question:
1.) Can ARCore recognize a document (say, a telephone bill)?
2.) Can we add actions to different parts of the document, like showing a 'Pay Bill' button near the bill amount or a graph near the data usage?
Today, no. ARCore was created with a focus on rendering 3D objects in the real world.
What you can do is use another tool like OpenCV to read and recognize the bill, and then with ARCore set an anchor close to where you detected it, with a chart containing some information and maybe a button.
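As a very rough sketch of the OpenCV half, locating a known bill layout by template matching could look like the following. This uses opencv.js from TypeScript purely for brevity (the OpenCV Android SDK exposes the same calls), and the function name and threshold are illustrative:
declare const cv: any; // opencv.js global, loaded separately via a script tag
function findBill(frameCanvas: HTMLCanvasElement, templateCanvas: HTMLCanvasElement) {
  const frame = cv.imread(frameCanvas); // current camera frame
  const templ = cv.imread(templateCanvas); // reference image of the bill layout
  const result = new cv.Mat();
  cv.matchTemplate(frame, templ, result, cv.TM_CCOEFF_NORMED);
  const { maxVal, maxLoc } = cv.minMaxLoc(result); // best match score and location
  frame.delete(); templ.delete(); result.delete(); // opencv.js Mats are freed manually
  return maxVal > 0.8 ? maxLoc : null; // top-left corner of the match, or null
}
The returned screen location is what you would then try to turn into an ARCore anchor, e.g. via a hit test at that point.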
But even hit-testing a 3D object in the air is not trivial.
Maybe you should first ask why this would be useful:
Why not just read the bill and show the information in the app without ARCore?
ARCore has heavy memory use and restricts which devices you can support, and in this case the anchor/chart would be tied to the place, so if you move the bill (today) the chart would not follow it.
Here you can find more information about the uses of ARCore.

Augmented Reality, Move 3d model respective to device movement

I am working on an augmented reality app. I have rendered a 3D model using OpenGL ES 2.0. Now, my problem is that when I move the device, the 3D model should move according to the device's movement, just like this app does: https://itunes.apple.com/us/app/augment/id506463171?l=en&ls=1&mt=8. I have tried using UIAccelerometer to achieve this, but I have not been able to get it working.
Should I use UIAccelerometer to achieve this, or some other framework?
This requires a complicated algorithm rather than just the accelerometer. You'd be better off using a third-party framework such as Vuforia or Metaio; that would save a lot of time.
Download and check a few sample apps. That is exactly what you want.
https://developer.vuforia.com/resources/sample-apps
You could use Unity3D to load your 3D model and export an Xcode project, or you could use OpenGL ES.
From your comment, am I to understand that you want to have the model anchored at a real-world location? If so, then the easiest way to do it is by giving your model a GPS location and reading the device's GPS location. There is actually a lot of research going into the subject of positional tracking, but for now GPS is your best (and likely only) option without going into advanced positional tracking solutions.
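To make that concrete, here is a minimal TypeScript sketch of the usual first step (the function name is made up): convert the model's GPS coordinates into a local east/north offset in metres from the device, using an equirectangular approximation that is accurate enough over AR-scale distances. You would then rotate that offset by the compass heading to place the model in camera space.
const EARTH_RADIUS = 6371000; // mean Earth radius in metres
const toRad = (deg: number) => (deg * Math.PI) / 180;
function gpsToLocalOffset(devLat: number, devLon: number, objLat: number, objLon: number) {
  const north = toRad(objLat - devLat) * EARTH_RADIUS; // metres north of the device
  const east = toRad(objLon - devLon) * EARTH_RADIUS * Math.cos(toRad(devLat)); // metres east
  return { east, north };
}
Keep in mind that consumer GPS is only accurate to a few metres at best, so the model will visibly jump around without additional filtering.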
Seeing as I can't add comments due to my account being too new, I'll also add a warning: don't try to position the device using accelerometer data. You'll get far too much error due to the double integration of acceleration to get position (see 'Indoor Positioning System based on Gyroscope and Accelerometer').
I would definitely use Vuforia for this task.
Regarding your comment:
I am using Vuforia framework to augment 3d model in native iOS. It's okay. But, I want to move 3d model when I move device. It is not provided in any sample code.
Well, it's not provided in any sample code, but that doesn't necessarily mean it's impossible or too difficult.
I would do it like this (working on Android, C++, but it must be very similar on iOS anyway):
locate your renderFrame function
simply apply your translation before the actual glDrawElements call:
QCARUtils::translatePoseMatrix(xMOV, yMOV, zMOV, &modelViewProjectionScaled.data[0]);
where the movement values (xMOV, yMOV, zMOV) would be prepared by a function that derives them from the accelerometer readings (time and acceleration)...
What I actually find challenging is finding the right calibration to properly adjust the sensor API's output, which is a completely different question, unrelated to AR/Vuforia. Here I guess you've got a huge advantage over Android devs, given the variety of Android devices...

How to store openstreetmap data locally on an iphone

I'm working on a project for college and I'm having great difficulty with part of it.
Simply put, I am looking to do the following 5 things:
download the open street map data for my city
store that data locally on the phone's hard drive.
view that data in my iOS application as a map
place markers on the map.
draw paths along roads between those markers.
I have been working on this particular part of the project for a number of weeks and I'm getting nowhere with it. I haven't even been able to figure out how to store the map on the phone, let alone view the map data. I've tried using the "Route-Me" library but cannot get it working (although it seems to be one of the best libraries for using OpenStreetMap data, so I am looking to learn how to use it). I feel pretty goddamned defeated.
If anyone has accomplished any of the tasks I am trying to do, could you please link me to tutorials/guides/videos that you have used?
I'm not looking for people to give me code or do the work for me, I want to learn how to do this, but if anyone can point me in the right direction of sites that I could learn off I would be very grateful.
Any advice or feedback would be much appreciated.
Here's how I ended up solving the problem.
Since TileMill doesn't natively read .osm/.o5m/.pbf files, I used Osmosis to convert a .osm file into .shp files.
I then created a new project in TileMill and added the particular .shp files I wanted as layers. It takes a little bit of tinkering to get the map to look the way you want, but the styling language is very similar to CSS and pretty easy to pick up as you go.
Once I had the map looking the way I wanted, I exported it as a .mbtiles file. This takes a long time to generate, and the files can be very large depending on how detailed the tiles are. I did one map of Ireland with zoom levels 7-14 inclusive and one map of just Dublin city with zoom levels 11-17 inclusive. Even though the map of just Dublin had far fewer tiles, they were both ~200MB in size.
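That size behaviour makes sense once you count tiles: each zoom level roughly quadruples the tile count for the same area, so the highest zoom levels dominate the export. As a quick sanity check, here is a small TypeScript helper using the standard slippy-map tile formulas to estimate how many tiles a bounding box needs at a given zoom:
const lon2tile = (lon: number, z: number) => Math.floor(((lon + 180) / 360) * 2 ** z);
const lat2tile = (lat: number, z: number) => {
  const r = (lat * Math.PI) / 180;
  return Math.floor(((1 - Math.log(Math.tan(r) + 1 / Math.cos(r)) / Math.PI) / 2) * 2 ** z);
};
function tileCount(minLat: number, minLon: number, maxLat: number, maxLon: number, z: number) {
  const xSpan = lon2tile(maxLon, z) - lon2tile(minLon, z) + 1;
  const ySpan = lat2tile(minLat, z) - lat2tile(maxLat, z) + 1; // tile y grows southwards
  return xSpan * ySpan;
}
Summing tileCount over zoom levels 7-14 for all of Ireland versus 11-17 for just Dublin shows why the two exports can end up in the same ballpark.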
I then found this tutorial online, which explains how to store the .mbtiles file in your application and how to read it: http://martinsikora.com/creating-mbtiles-db-for-ios-mapbox-from-hi-res-map-image
Here are a few other links that I found useful:
http://www.kindle-maps.com/blog/using-tilemill-with-openstreetmap-data.html
http://mapbox.com/developers/mbtiles/
http://mapbox.com/mapbox-ios-sdk/api/
http://mapbox.com/developers/api/#static_api
http://support.mapbox.com/discussions
I hope this is useful to someone.
I would suggest trying the MapBox iOS SDK. It is actually forked from the Route-Me library and will allow you to accomplish everything on your list.
A key point to remember is that you have another step in between downloading the OSM data and storing it locally on the iOS device, that is, generating the map tiles and storing them in some sort of database.
Here is an example iOS app using the MapBox SDK that has both online and offline map sources and is a good place to start.
