I've heard the term "augmented reality" used before, but what does it mean?
In particular, what is an augmented reality iPhone application?
From: http://en.wikipedia.org/wiki/Augmented_reality
Augmented reality (AR) is a term for a live direct or indirect view of a physical, real-world environment whose elements are augmented by virtual computer-generated sensory input, such as sound or graphics. It is related to a more general concept called mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented) by a computer. As a result, the technology functions by enhancing one's current perception of reality.
In the case of Augmented Reality, the augmentation is conventionally in real-time and in semantic context with environmental elements, such as sports scores on TV during a match. With the help of advanced AR technology (e.g. adding computer vision and object recognition) the information about the surrounding real world of the user becomes interactive and digitally usable. Artificial information about the environment and the objects in it can be stored and retrieved as an information layer on top of the real world view. The term augmented reality is believed to have been coined in 1990 by Thomas Caudell, an employee of Boeing at the time.
Incidentally, there are some images at the above URL that should make what's being discussed above fairly evident.
An augmented reality application is software that adds (augments) data or visuals to your live camera view.
Popular examples include Snapchat filters, Yelp Monocle, and various map applications.
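To make the idea concrete, here is a minimal sketch (not from the answer above) of the basic AR pattern: capture camera frames and draw computer-generated elements on top of them before display. It uses OpenCV in Python purely for illustration; a real iPhone AR app would use ARKit and the device camera APIs instead, and the overlay content shown here is made up.

# Minimal illustration of the AR idea: overlay generated graphics on live camera frames.
# Uses OpenCV for brevity; a real iPhone app would use ARKit/AVFoundation instead.
import cv2

def run_overlay_demo():
    cap = cv2.VideoCapture(0)  # default webcam
    if not cap.isOpened():
        raise RuntimeError("No camera available")

    while True:
        ok, frame = cap.read()
        if not ok:
            break

        h, w = frame.shape[:2]
        # "Augment" the real-world view with computer-generated elements:
        cv2.rectangle(frame, (20, 20), (260, 70), (0, 0, 0), thickness=-1)
        cv2.putText(frame, "Score: 3 - 1", (30, 55),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)
        cv2.circle(frame, (w // 2, h // 2), 40, (0, 255, 0), 2)  # marker at image centre

        cv2.imshow("augmented view", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    run_overlay_demo()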
"Augmented reality (AR) is a live direct or indirect view of a physical, real-world environment whose elements are "augmented" by computer-generated or extracted real-world sensory input such as sound, video, graphics or GPS data. It is related to a more general concept called computer-mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented) by a computer. Augmented reality enhances one’s current perception of reality, whereas in contrast, virtual reality replaces the real world with a simulated one.1 Augmentation techniques are typically performed in real time and in semantic context with environmental elements, such as overlaying supplemental information like scores over a live video feed of a sporting event." source: wikipedia.org
Related
I am doing a study in the field of augmented reality, particularly on Google's ARCore technology. I would like to know whether the SLAM method is required for model-based tracking. It seems obvious to me that it is not used in this case, but I could not find any article to confirm this.
My second question is similar and concerns the Azure Spatial Anchors technology. This technology can recognize a scene that was visualized during a previous session. In that way, Azure Spatial Anchors reminds me a little of model-based tracking, which can recognize a 3D object that was previously recorded. So, in the same way, I was wondering whether using Azure Spatial Anchors requires the SLAM method?
Have a look at Frequently asked questions about Azure Spatial Anchors
Azure Spatial Anchors depends on mixed reality / augmented reality trackers. These trackers perceive the environment with cameras and track the device in 6-degrees-of-freedom (6DoF) as it moves through the space.
Given a 6DoF tracker as a building block, Azure Spatial Anchors allows you to designate certain points of interest in your real environment as "anchor" points. You might, for example, use an anchor to render content at a specific place in the real-world.
When you create an anchor, the client SDK captures environment information around that point and transmits it to the service. If another device looks for the anchor in that same space, similar data transmits to the service. That data is matched against the environment data previously stored. The position of the anchor relative to the device is then sent back for use in the application.
...
For each point in the sparse point cloud, we transmit and store a hash of the visual characteristics of that point. The hash is derived from, but does not contain, any pixel data.
There is disclosure in Microsoft Research Blog that the same type of visual simultaneous localization and mapping (SLAM) algorithms are being used with Azure Spatial Anchors: Azure Spatial Anchors: How it works
For further details on the algorithm (under NDA), you can open a tech support ticket.
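To illustrate the matching idea described above, here is a rough conceptual sketch: each point in a sparse point cloud is stored as a hash of its visual descriptor, and a query from another device is matched against those stored hashes. This is not Microsoft's actual implementation (those details are not public); the class, thresholds, and toy descriptors below are invented for illustration.

# Conceptual sketch only: anchors stored as hashes of visual feature descriptors,
# then re-located by matching a new query against the stored hashes.
# This is NOT the Azure Spatial Anchors implementation; names and thresholds are made up.
import hashlib

def descriptor_hash(descriptor, precision=1):
    """Hash a visual descriptor (e.g. a quantized feature vector) without keeping pixel data."""
    quantized = tuple(round(v, precision) for v in descriptor)
    return hashlib.sha256(repr(quantized).encode()).hexdigest()

class ToyAnchorStore:
    def __init__(self):
        self._anchors = {}  # anchor_id -> set of descriptor hashes

    def create_anchor(self, anchor_id, descriptors):
        """Store the environment signature captured around an anchor point."""
        self._anchors[anchor_id] = {descriptor_hash(d) for d in descriptors}

    def locate(self, query_descriptors, min_overlap=0.3):
        """Return the best-matching anchor for descriptors seen by another device."""
        query = {descriptor_hash(d) for d in query_descriptors}
        best_id, best_score = None, 0.0
        for anchor_id, stored in self._anchors.items():
            score = len(stored & query) / max(len(stored), 1)
            if score > best_score:
                best_id, best_score = anchor_id, score
        return best_id if best_score >= min_overlap else None

# Example: a second device re-observes most of the same features.
store = ToyAnchorStore()
store.create_anchor("kitchen-table", [(0.11, 0.52), (0.33, 0.78), (0.90, 0.12)])
print(store.locate([(0.11, 0.52), (0.33, 0.78)]))  # -> "kitchen-table"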
I'm having some difficulties understanding the concept of teleoperation in ROS so hoping someone can clear some things up.
I am trying to control a Baxter robot (in simulation) using a HTC Vive device. I have a node (publisher) which successfully extracts PoseStamped data (containing pose data in reference to the lighthouse base stations) from the controllers and publishes this on separate topics for right and left controllers.
So now I wish to create the subscribers which receive the pose data from controllers and converts it to a pose for the robot. What I'm confused about is the mapping... after reading documentation regarding Baxter and robotics transformation, I don't really understand how to map human poses to Baxter.
I know I need to use IK services which essentially calculate the co-ordinates required to achieve a pose (given the desired location of the end effector). But it isn't as simple as just plugging in the PoseStamped data from the node publishing controller data to the ik_service right?
Human and robot anatomy are quite different, so I'm not sure if I'm missing a vital step here.
Looking at other people's example code for doing the same thing, I see that some people have created a 'base'/'human' pose which hard-codes coordinates for the limbs to mimic a human. Is this essentially what I need?
Sorry if my question is quite broad but I've been having trouble finding an explanation that I understand... Any insight is very much appreciated!
You might find my former student's work on motion mapping using a Kinect sensor with a PR2 informative. It shows two methods:
Direct joint angle mapping (e.g., if the human has the arm at a right angle, then the robot should also have the arm at a right angle).
An IK method that controls the robot's end effector based on the human's hand position.
I know I need to use IK services which essentially calculate the co-ordinates required to achieve a pose (given the desired location of the end effector). But it isn't as simple as just plugging in the PoseStamped data from the node publishing controller data to the ik_service right?
Yes, indeed, this is a fairly involved process! In both cases, we took advantage of the Kinect's API to access the human's joint angle values and the position of the hand. You can read about how Microsoft Research implemented the human skeleton tracking algorithm here:
https://www.microsoft.com/en-us/research/publication/real-time-human-pose-recognition-in-parts-from-a-single-depth-image/?from=http%3A%2F%2Fresearch.microsoft.com%2Fapps%2Fpubs%2F%3Fid%3D145347
I am not familiar with the Vive device. You should check whether it offers a similar API for accessing skeleton-tracking information, since reverse engineering Microsoft's algorithm would be challenging.
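As a rough sketch of the second (IK-based) approach for the Baxter/Vive setup in the question: subscribe to the controller's PoseStamped topic, apply a fixed offset to shift the controller frame into Baxter's workspace, and pass the result to Baxter's IK service. The SolvePositionIK service and baxter_interface come from the standard Baxter SDK; the controller topic name and the workspace offsets below are assumptions for illustration only.

#!/usr/bin/env python
# Sketch: map an HTC Vive controller pose onto Baxter's left arm via the IK service.
# The topic name and workspace offsets are illustrative assumptions, not part of any SDK.
import rospy
from geometry_msgs.msg import PoseStamped
from baxter_core_msgs.srv import SolvePositionIK, SolvePositionIKRequest
import baxter_interface

class ViveToBaxter(object):
    def __init__(self, limb="left"):
        self.limb = baxter_interface.Limb(limb)
        srv_name = "/ExternalTools/%s/PositionKinematicsNode/IKService" % limb
        rospy.wait_for_service(srv_name)
        self.ik = rospy.ServiceProxy(srv_name, SolvePositionIK)
        rospy.Subscriber("/vive/controller_left/pose",  # assumed topic from the publisher node
                         PoseStamped, self.on_pose, queue_size=1)

    def on_pose(self, msg):
        # Crude mapping: shift the controller position into Baxter's reachable workspace.
        target = PoseStamped()
        target.header.frame_id = "base"
        target.header.stamp = rospy.Time.now()
        target.pose = msg.pose
        target.pose.position.x += 0.6   # offsets chosen by hand for illustration
        target.pose.position.z -= 0.3

        req = SolvePositionIKRequest()
        req.pose_stamp.append(target)
        resp = self.ik(req)
        if resp.isValid[0]:
            joints = dict(zip(resp.joints[0].name, resp.joints[0].position))
            self.limb.set_joint_positions(joints)
        else:
            rospy.logwarn("No IK solution for this controller pose")

if __name__ == "__main__":
    rospy.init_node("vive_to_baxter_teleop")
    ViveToBaxter("left")
    rospy.spin()

A direct joint-angle mapping (the first approach above) would instead require estimates of the human's shoulder and elbow angles, which the Vive controllers alone do not provide; that is why the IK route is the more natural fit here.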
My goal is to overlay material/texture on a physical object (it would be an architectural model) that I would have an identical 3d model of. The model would be static (on a table if that helps), but I obviously want to look at the object from any side. The footprint area of my physical models would tend to be no smaller than 15x15cm and could be as large as 2-3m^2, but I would be willing to change the size of the model to work with ARCore's capability.
I know ARCore is mainly designed to anchor digital objects to flat horizontal planes. My main question is: in its current state, is it capable of accomplishing my end goal? If I have this right, it would record physical point-cloud data and attempt to match it to the point-cloud data of my digital model, then overlap the two on the phone screen?
If that really isn't what ARCore is for, is there an alternative that I should be focusing on? In my head this sounded fairly straightforward, but I'm sure I'll get way out of my depth if I go about it in an inefficient way. Speaking of depth, I would prefer not to use a depth sensor, since my target devices are phones.
I most definitely hope that it will be possible in the future - after all an AR toolkit without Computer Vision is not that helpful.
Unfortunately, according to Ian from the ARCore team, this is currently not directly supported, but you could try to access the pixels via glReadPixels and then use OpenCV on those image bytes.
Quote from Ian:
I can't speak to future plans, but I agree that it's a desirable capability. Unfortunately, my understanding is that current Android platform limitations prevent providing a single buffer that can be used as both a GPU texture and CPU-accessible image, so care must be taken in providing that capability.
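Once you have the raw frame bytes (however you read them back on the device), the OpenCV side is fairly mechanical. The sketch below is a desktop Python illustration under the assumption that you already have RGBA bytes of a camera frame plus a reference image of your model rendered from a known viewpoint; it wraps the bytes in a NumPy array and runs ORB feature matching, which is one common way to check whether the physical object is in view. The function names, sizes, and file paths are placeholders, not part of ARCore.

# Illustration only: what "use OpenCV with these image bytes" could look like,
# assuming raw RGBA frame bytes (e.g. read back from the GPU) and a reference
# image of the digital model rendered from a known viewpoint.
import numpy as np
import cv2

def frame_from_rgba_bytes(raw_bytes, width, height):
    """Wrap raw RGBA bytes in a NumPy array and convert to grayscale for matching."""
    rgba = np.frombuffer(raw_bytes, dtype=np.uint8).reshape(height, width, 4)
    return cv2.cvtColor(rgba, cv2.COLOR_RGBA2GRAY)

def count_matches(frame_gray, reference_gray, ratio=0.75):
    """ORB features + ratio-test matching; a high count suggests the object is in view."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(frame_gray, None)
    kp2, des2 = orb.detectAndCompute(reference_gray, None)
    if des1 is None or des2 is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good)

# Usage (paths and sizes are placeholders):
# frame = frame_from_rgba_bytes(raw, 1920, 1080)
# ref = cv2.imread("model_render.png", cv2.IMREAD_GRAYSCALE)
# print(count_matches(frame, ref))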
Updated: 25 September, 2022.
At the moment there's still no 3D Object Recognition API in ARCore 1.33.
But... you can use the ML Kit framework and the Augmented Images API (ARCore 1.2+) for some tasks.
According to Google documentation, you can use ARCore as input for Machine Learning models.
I want to use an already implemented SLAM algorithm for mapping my college campus.
I have found some algorithms on OpenSLAM.org and some other independent ones such as LSD-SLAM and Hector SLAM, which show some promise, but they have limitations, such as requiring LIDAR or not scaling to large datasets.
SLAM has been an active research topic for many years and some groups have even mapped an entire town. Can someone point me to such an efficient algorithm?
My requirements are:
It must use RGB camera/cameras.
Preferably produce a (somewhat) dense map of the area.
It should be able to map a large area (I have seen some algorithms which can only map up to a desk or a room, but they usually lose track if there is a jerk in the camera motion, as observed in LSD-SLAM, or they take very few landmarks, which is only useful for study purposes).
Preferably a ROS implementation.
I've been reading a lot about the many differences, pros and cons between NURBS and polys, but is there a difference when it comes to 3D printing?
The printed model is typically polygonized before printing; it's easier to do things like watertightness checks and so on using triangle meshes. A NURBS model can be polygonized at various resolutions, so it should be possible to get a higher-quality, smoother-looking print by starting with a NURBS model and using a very generous tessellation at printing time. The tessellation may not always produce a watertight mesh; depending on the software used to do the printing, that might cause problems which need to be fixed up by hand.
So, overall, the main advantage of a NURBS model in this context is that you can work with a more efficient, lightweight representation of the data up until it's time to print: the final printed mesh may be impractically dense for most ordinary applications (millions of triangles).
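As a toy illustration of "polygonized at various resolutions" (not tied to any particular package), the sketch below evaluates a single bicubic Bezier patch, standing in for a NURBS surface, on an N x N grid and emits triangles; raising the sample count gives a smoother but heavier mesh, which is essentially the trade-off you control when tessellating a NURBS model for printing. The patch data and resolutions are made up.

# Toy example: tessellate one bicubic Bezier patch (a simple stand-in for a NURBS
# surface) at a chosen resolution. Higher "samples" = smoother but heavier mesh.
import numpy as np

def bernstein3(t):
    """Cubic Bernstein basis weights at parameter t."""
    return np.array([(1 - t) ** 3, 3 * t * (1 - t) ** 2, 3 * t ** 2 * (1 - t), t ** 3])

def bezier_point(control, u, v):
    """Evaluate a bicubic patch; `control` is a (4, 4, 3) array of 3D control points."""
    bu, bv = bernstein3(u), bernstein3(v)
    return np.einsum("i,ijk,j->k", bu, control, bv)

def tessellate(control, samples=16):
    """Sample the patch on a samples x samples grid and build a triangle list."""
    ts = np.linspace(0.0, 1.0, samples)
    vertices = np.array([bezier_point(control, u, v) for v in ts for u in ts])
    triangles = []
    for row in range(samples - 1):
        for col in range(samples - 1):
            a = row * samples + col
            b, c, d = a + 1, a + samples, a + samples + 1
            triangles += [(a, b, d), (a, d, c)]
    return vertices, triangles

# A gently curved patch: z bulges in the middle of an otherwise flat 4x4 control grid.
ctrl = np.zeros((4, 4, 3))
for i in range(4):
    for j in range(4):
        ctrl[i, j] = (i, j, 1.0 if 1 <= i <= 2 and 1 <= j <= 2 else 0.0)

verts, tris = tessellate(ctrl, samples=8)         # coarse preview
verts_hd, tris_hd = tessellate(ctrl, samples=64)  # dense mesh for printing
print(len(tris), "vs", len(tris_hd), "triangles")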
To add to theodox's answer: the other reason is that CAD/CAE applications do not really like polygon models and treat them as second-class citizens at best. So if you need to run some analysis on the model, do some extra operations, or send it to an engineer, the NURBS model is MUCH better. It allows the engineer to optimize production paths, so if they are using high-end printers or CNC machines instead, they can do a much better job. If you do not use a NURBS model, the engineer will most likely just reverse engineer your model and throw your data away.
Maya, on the other hand, is not a very conducive engineering application. But as an upside, you can just use subdivision surfaces and get both a NURBS model and the benefits of polygon modeling.
PS: For an engineering application, making the model watertight is no problem whatsoever as long as your gaps are not too big.
It depends on what you mean by polys. Most of the time, what people mean is that you model a poly mesh and then smooth it (by hitting '3' or turning it into a subd).
If you're doing that, NURBS have absolutely no advantage over subds for 3D printing in terms of smoothness.
NURBS surfaces created with Class A technical surfacing may be considered "airtight" surface meshes. B-spline mathematical surfaces can capture physical compression/tension surface characteristics as "structurally loaded" system model architectures. G-code file formats now apply B-spline data in vector-based tool-path manufacturing. Raster-based polygon smoothing works against accurate modeling of functional engineered prototypes that demand zero-tolerance accuracy: the smooth function produces an unpredictable mesh that is an undefinable approximation. Professional 3D-print solutions employ NURBS-geometry G-code directly and do NOT create the polygon-tessellated meshes seen in common STL file formats. The future of 3D modeling and additive manufacturing is clearly vector-based B-spline NURBS surface product architecture.