How do I append multiple area learning sessions onto the same ADF to capture visual descriptions of the environment from every position and angle?
Currently, there's no multiple ADF stitching pipeline available. However, you can enable learning mode along with loading an ADF. This will append the new "learned" area to the old ADF.
Also, please note that the learning only happens after the device is relocalized in the loaded ADF.
I am building a range-only localisation application for which I would like to use an MRPT particle filter.
I am looking at the MRPT example applications:
ro-localization
pf-localization
rbpf-slam
All of these run fine on the sample data, but the resulting pose is returned in 2D only.
The pf-localization sample has a use_3D_poses = true setting, but this just returns 0 for the Z axis.
I have tried adding Z values to the beacons, but the pose Z is always zero.
How can I use the MRPT particle filter in 3D space, instead of 2D?
I am only interested in the XYZ position values of the mobile node.
How can I use my own live data? I have:
Anchor positions
Distances from each anchor to the mobile node.
I am using MRPT on Windows, built from source.
The different RO-SLAM and RO-localization possibilities are now better described on this page of the MRPT docs.
On using pf-localization with your own data, you could directly use a custom program making use of mrpt::slam::CMonteCarloLocalization3D.
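To make that option more concrete, here is a minimal, untested sketch of such a custom program, fed with live anchor/range data like yours. The class names are real MRPT classes, but details such as the options.metricMap field, the PDF constructor and the smart-pointer helpers changed a bit across MRPT versions, so treat this as an outline to check against the headers of the version you built:

```cpp
// Sketch only (not the official example). Field names such as options.metricMap
// and the exact constructors may differ between MRPT 1.x and 2.x.
#include <mrpt/bayes/CParticleFilter.h>
#include <mrpt/maps/CBeaconMap.h>
#include <mrpt/obs/CActionCollection.h>
#include <mrpt/obs/CObservationBeaconRanges.h>
#include <mrpt/obs/CSensoryFrame.h>
#include <mrpt/poses/CPose3D.h>
#include <mrpt/slam/CMonteCarloLocalization3D.h>

int main()
{
    // 1) Map of anchors (beacons) at known XYZ positions.
    mrpt::maps::CBeaconMap beaconMap;
    // ... insert one beacon per anchor at its known position, or load the
    //     beacon map from a config file / .simplemap.

    // 2) Particle filter PDF over the 3D pose of the mobile node.
    mrpt::slam::CMonteCarloLocalization3D pdf(5000 /* particles */);
    pdf.options.metricMap = &beaconMap;  // map used to evaluate the range likelihood
    // Optionally spread the initial particles over your working volume
    // (see the reset* methods of the PDF class).

    // 3) The particle filter algorithm itself.
    mrpt::bayes::CParticleFilter pf;
    pf.m_options.PF_algorithm     = mrpt::bayes::CParticleFilter::pfStandardProposal;
    pf.m_options.resamplingMethod = mrpt::bayes::CParticleFilter::prSystematic;

    // 4) One iteration per timestep: an action (motion model, may be empty)
    //    plus a sensory frame holding the live range measurements.
    for (;;)  // replace with your data-acquisition loop
    {
        mrpt::obs::CActionCollection actions;  // empty => no odometry, pure range update
        mrpt::obs::CSensoryFrame sf;

        auto rangesObs = mrpt::obs::CObservationBeaconRanges::Create();
        mrpt::obs::CObservationBeaconRanges::TMeasurement m;
        m.beaconID = 0;           // must match the ID of a beacon in the map
        m.sensedDistance = 3.2f;  // metres, from your ranging hardware
        rangesObs->sensedData.push_back(m);
        // ... push one TMeasurement per anchor heard this timestep
        sf.insert(rangesObs);

        pf.executeOn(pdf, &actions, &sf);

        mrpt::poses::CPose3D mean;
        pdf.getMean(mean);  // mean.x(), mean.y(), mean.z() are the XYZ estimate
    }
    return 0;
}
```

Because the PDF holds full CPose3D particles and the beacon map evaluates the range likelihood in 3D, the estimated mean should carry a real Z value rather than the Z = 0 you were seeing from the 2D app.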
Alternatively, you could stay tuned to the latest version of MRPT (git branch develop), where we'll very soon port the existing pf-localization CLI app into a C++ class in mrpt::apps. The advantage of the latter is that it reuses all the log-file writing, 3D scene grabbing, online visualization, etc.
But nothing prevents you from jumping straight ahead and building your own app with pf-localization as a starting point.
(Disclaimer: I'm the lead developer of MRPT)
I am trying to make a photo editing application for iOS, but am not sure where to start looking. I have attached an image made in Word... that hopefully depicts, simply, what I am trying to achieve. It will involve manipulating individual pixels of a shape/image and masking/clipping. How should I start, and what resources are available to me other than the developer docs?
Cheers
If you are not new to programming, I would suggest a trial-and-error kind of approach. If it were me, I would follow an approach like this:
Figuring out what to do / what not to do
Do I need to develop the tech I want from scratch, or can I use some pods?
What are good reads and example apps? (Try this)
Development approach
Build a photo gallery to pick images from
Build an EDIT-mode screen
Get a set of template overlay images
Figure out how to overlay them on top of each other
Export the final result as a single picture
The developer documentation is essential when it comes to learning new APIs, but sometimes it can be a little overwhelming. You can try reading the raywenderlich.com tutorials on Core Image first to get an idea (link here), or find a book on computer graphics. It is essential to understand at least the underlying techniques to write efficient image-processing code. In many cases you'll find there is a more elegant technique than just looping over pixels and modifying them one by one.
Then you can continue with reading on image compositing using Core Image, for example.
Augmented reality doesn't work if Auto-Connect is disabled in the Tango Manager, and if Auto-Connect is enabled it doesn't allow any ADF to load.
So how can we use pose data of an ADF to make AR objects that are persistent and appear with respect to the ADF origin?
In order to use ADF files you'll have to connect manually, as described here: https://developers.google.com/tango/apis/unity/unity-user-permissions
Auto-connect and Area Description Files can't work together for an obvious reason: auto-connect happens on startup, whereas an ADF would first have to be chosen.
This example uses AR and an ADF together, and it is a good reference for you too:
https://github.com/googlesamples/tango-examples-unity/tree/master/UnityExamples/Assets/TangoSDK/Examples/AreaLearning
I'd like to know how to create a target for large-scale architectural AR on a real site. In other words, I need Google Tango to superimpose my 3D model on a specific place.
I have tried the Google Tango Area Learning tutorials (https://developers.google.com/tango/apis/unity/unity-codelab-area-learning), but after showing the message WALK AROUND TO RELOCALIZE the tablet does nothing, although I walk around to detect the real space; then, after a few minutes, the message "Unity project has stopped" appears on the Google Tango tablet screen.
Could an ADF file be used instead of relocalizing the environment?
I've captured some interior scenes with Tango Explorer and saved them, but I'm not able to use them for environment-recognition purposes.
I work with Unity and a Google Tango tablet.
Thank you in advance for your response.
For anyone else facing this problem - the likely cause is not having a recent ADF file already on the device.
You need to first create an Area Description File (ADF) by scanning, and then you can separately localise to that ADF - so you cannot "use an ADF instead of relocalising."
The tutorial you link above needs you to have separately created an ADF for your location - it simply chooses the most recent one you have.
You can use the Area Learning example to create your ADFs, and try localising to them. It also shows superimposing 3D models.
Also, look at the augmented reality example to see how to have objects already loaded in a specific place.
I searched on Google and Stack Overflow to see if anyone had a solution to my problem, but I didn't find anyone with the same problem.
Currently I'm running a Debian machine with MapServer installed on it. The server also runs a web server for displaying map data in the browser. The map generation is dynamic: based on the layer definitions in a database, I build the mapfile in PHP, and from that generated mapfile the map is shown to the user. The data is defined in the database and as SHP files (both combined in a single mapfile).
It is fully dynamic. What I mean by that is that the user can enable/disable any of the layers, or click inside a polygon (select some points on the map), and the selection gets colored (a new mapfile is generated based on the selection and the tiles are re-generated).
So the execution of all that code, from selecting an area to coloring the selected items, sometimes takes too much time for a good user experience.
As a solution I'd like to use some kind of temporary tile cache that can be used for a single user, whose contents can be deleted when the user selects some items on the map or enables/disables one of the layers.
P.S. I have already done all the optimizations suggested in the MapServer documentation.
Thanks for any help.
It sounds to me like your problem is not going to be helped by server-side caching. If all of the tiles depend on user selections, then you're going to be generating a bunch of new tiles every time there's an interaction.
I've been using MapCache to solve a similar problem, where I am rendering a tileset in response to a user query. But I've broken up my tiles into multiple logical layers, and I do the compositing on the browser side. This lets me cache the tiles for the various queries server-side, and it has sped up performance immensely. I did seed the cache down to zoom level 12, and I needed to use the BerkeleyDB cache type to keep from running out of inodes.
I'm using Leaflet.js for the browser-side rendering, but you should also consider OpenLayers.
After looking at the source code, I have some other ideas.
It looks like you're drawing each layer the same way each time. Is that right? That is, the style and predicate of a particular layer never change: each user sees the image for that layer the same way, if they have selected the layer. But the combination of layers you show does change, based on an OpenLayers control? If that's the case, you don't need per-user caching on the server. Instead, use per-layer caching, and let the user's browser handle the client-side caching.
A quick technique for finding slow layers is to turn them all off, then re-enable them one by one to find the culprit. Invoke MapServer from the command line and time the runs, for greater precision than you'll get by running it from your web server.
You mentioned you're serving the images in Web Mercator (EPSG:3857) while the layers are in Gauss-Krüger / EPSG:3912. Reprojecting this on the fly is expensive; reprojecting the rasters on the fly is very expensive. If you can, you should reproject them ahead of time and store them in 3857 (add an additional geometry column).
I don't know what a DOF file is (maybe a Digital Obstacle File?). Perhaps preload the DOF file into PostGIS too? That would eliminate the two pieces you think are problematic.
Take a look at the SQL queries that PostGIS is performing, and make sure those are using indexes.
In any case, these individual layers should go into MapCache, in my opinion. Here is a video of a September 2014 talk by the MapCache project leader.