How can I track multiple identical ImageTargets in Unity / Vuforia?

When I put the ImageTarget prefab into the Hierarchy view, the ARCamera only tracks one marker (the child of that ImageTarget) instead of all the markers it sees. Can you tell me how to track all the markers? For example, in the picture below I put two identical markers next to each other, but only one of them gets rendered (instead of both at the same time). Thanks a lot.
http://i.stack.imgur.com/GxAmQ.png
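
A note on Vuforia's behaviour here: by default Vuforia tracks only one image target at a time, and, more importantly, it recognises only one instance of the same image at a time, so two identical printed markers will normally produce a single augmentation. To see several augmentations at once you need distinct target images plus a raised simultaneous-tracking limit. In recent SDKs that limit is the "Max Simultaneous Tracked Images" field of the VuforiaConfiguration asset; in older SDKs it was set with a runtime hint, roughly as sketched below (the exact class and enum names vary by SDK version, so treat them as assumptions to check against your release):

```csharp
using UnityEngine;
using Vuforia; // assumption: namespace of the older Vuforia Unity SDK

public class SimultaneousTargetsHint : MonoBehaviour
{
    void Start()
    {
        // Assumption: asks Vuforia to track up to 2 image targets at once
        // (the default is 1). In some SDK versions this call is named
        // QCARUnity.SetHint instead, and in current versions it is a
        // setting in the VuforiaConfiguration inspector rather than code.
        VuforiaUnity.SetHint(VuforiaHint.HINT_MAX_SIMULTANEOUS_IMAGE_TARGETS, 2);
    }
}
```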

Related

How to have multiple cameras in a manim 3D Scene

I'm looking for a setup in manim (0.8.0 CE) where the main part of the screen shows the scene from an ambient camera, plus two additional views on the right-hand side (like picture-in-picture, or split screen) from fixed, different angles on the scene. I've found a MultiCamera class, but it is fixed to a MovingCamera. I assume that I need additional three_d_camera objects, but I couldn't find an example in any of the tutorials that points me in the right direction. Does anyone have ideas how to do this?

How to replace an SCNNode with another in the same place, with part markers?

My AR example app shows a heart 3D model on touch. When tapping the 3D model again, I want to show part markers on the heart. For that, I created two models: one showing the plain heart and another showing the heart with part-marker arrows.
But I don't know how to place the marker model at the exact place where I placed the previous node. The node should be swapped only on tap; scale, transformation, and position all have to stay the same.
Can you tell me how to do it?
Create an empty node on tap and add your heart model as its child node. Then, on the next tap, remove the heart node and add the marker node as a child node instead; since the parent node's transform is unchanged, the new model appears in exactly the same place.
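
A minimal sketch of that parent-container swap, written in Unity C# to match the rest of this thread; the SceneKit version is analogous (call removeFromParentNode() on the old child, then addChildNode() on the shared parent with the new one). The prefab names below are illustrative:

```csharp
using UnityEngine;

// Illustrative sketch of the answer's pattern: keep one empty anchor
// object whose transform never changes, and swap its children.
public class ModelSwapper : MonoBehaviour
{
    public GameObject plainHeartPrefab;   // hypothetical prefab references
    public GameObject markedHeartPrefab;

    GameObject current;
    bool showingMarkers;

    void Start()
    {
        // Show the plain model first, as a child of this anchor.
        current = Instantiate(plainHeartPrefab, transform);
    }

    public void OnTap()
    {
        // Swap the child; the anchor's position/rotation/scale are
        // untouched, so the replacement lands in exactly the same place.
        Destroy(current);
        showingMarkers = !showingMarkers;
        current = Instantiate(showingMarkers ? markedHeartPrefab : plainHeartPrefab, transform);
    }
}
```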

Select AR placed objects with a dot in the middle of the screen in Unity

I have seen numerous AR applications behaving like this: there is a dot in the middle of the screen, and we can position that dot on some object to have some content displayed (I attached an image in case I was not clear enough). My question is: how is this kind of behaviour obtained in Unity? My guess is that you cast a ray from that point, but I don't think that AR-placed objects, from an ADF for example, can be found with the hit from that ray.
The dot selecting objects placed on AR
I have made it work with the aid of Google Tango's Area Learning demo scene. I placed some objects in the area and cast a ray from the middle of the camera with the "ViewportPointToRay" method. When that ray collides with a GameObject, you can implement whatever functionality you need.
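
A minimal version of that, assuming the placed objects have colliders (Physics.Raycast only finds colliders); the class and field names are illustrative:

```csharp
using UnityEngine;

// Casts a ray from the centre of the screen each frame and reports
// whatever collider it hits.
public class CenterDotSelector : MonoBehaviour
{
    public Camera arCamera;        // the AR camera in the scene
    public float maxDistance = 50f;

    void Update()
    {
        // (0.5, 0.5) in viewport space is the middle of the screen.
        Ray ray = arCamera.ViewportPointToRay(new Vector3(0.5f, 0.5f, 0f));

        if (Physics.Raycast(ray, out RaycastHit hit, maxDistance))
        {
            // hit.collider.gameObject is the object under the dot;
            // show your content / highlight from here.
            Debug.Log("Looking at: " + hit.collider.gameObject.name);
        }
    }
}
```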

Advice for library with GeoSpatial Mapping that allows users to place moving objects on a 2D map

I'm looking for a library/framework/toolkit that will allow me to render a 2D map from real GeoSpatial data and draw objects on the 2D map.
Requirements:
Map tiling (when I zoom into the map, I want a more detailed image)
Pan (ability to use the mouse to move around the map)
Read various geospatial imagery (satellite, street, etc.)
Ability to draw objects onto the map (based on lat/longs) and have them move. For example, I want to be able to put an image of a bird on the map and have it move and rotate correctly.
Primitive shapes. It would be nice if it had a built-in ability to draw lines, circles, etc.
Complex drawing. For example, I want to draw a compass and have it show the current heading of the bird.
Mouse input. I want to be able to right-click on the map and have a context menu appear. I want to be able to click and hold a shape I drew on the map and drag it easily.
What I have looked at:
OpenSceneGraph with osgEarth. It's great and fulfills my requirements, but it is really slow, and I had to do a lot of weird things to get it working (especially dragging objects on the map).
Cesium: looks promising, but somewhat slow, and I need it to work as a desktop application. I've seen online that some have managed to use Cesium inside Qt's WebKit, but I'm not sure I would want to take that risk.
EDIT:
I really want to stay away from a web-based framework if possible.
Here is a primitive picture of what I want to achieve. The aircraft icon should move, and the degree circle along with it. I want to be able to drag the green waypoints and have the lines redraw as I move a waypoint. The red sensor footprint should adjust to what the aircraft can see.
http://imgur.com/52DaJtQ
Google Maps, Open Street Map, Bing Maps.
I use OpenSceneGraph/osgEarth extensively and am not dissatisfied with its performance.
What kind of weird things did you need to do?
If you want, you can contact me privately to troubleshoot your situation. My website is AlphaPixel.com, and there's a contact form there.

Google Fusion Tables Polygons to Points

I have a Google Fusion Table that contains some very small polygons over a very large area. I'd like to create an event that switches from polygons to points when the user zooms to a certain level. Currently, the points are only generated at the outermost zoom level (the entire world). In this example the polygons turn to points when you zoom out by just one level, and I'd like to do something similar. Any advice would be greatly appreciated.
The rendering of polygons as points is not a selectable feature; it usually happens when there are too many features (or when a feature cannot be rendered properly).
What you can do: create another geometry column where you store the desired points (e.g. the center of each polygon); then you'll be able to choose which column should be used to render the geometry.
