Lab: Forum about AR (Spark AR, Lens Studio, 8thWall, Reality Composer, etc.) - augmented-reality

Here's the Lab: https://lab.popul-ar.com/
It's a forum focused specifically on Augmented Reality, with creators active in Spark AR, Lens Studio, 8thWall, Reality Composer, etc.

Related

MediaPipe vs MLKit Vision vs ARCore

There seems to be a lot of overlap between these 3 Google libraries.
According to their sites:
MediaPipe: MediaPipe offers cross-platform, customizable ML solutions for live and streaming media.
ARCore: With ARCore, build new augmented reality experiences that seamlessly blend the digital and physical worlds.
MLKit Vision: Video and image analysis APIs to label images and detect barcodes, text, faces, and objects.
Could someone with experience working with these explain how they relate to each other and what the use cases are for each?
For example, which would be appropriate for implementing high-level, popular features such as face filters?
(Also, perhaps some insight on which of the three is most likely to land in the Google Graveyard the fastest.)
Some simplified and informal explanations:
MediaPipe is a powerful but lower-level library for live and streaming ML solutions; it requires non-trivial setup and customization before it works for your use case.
ML Kit is an end-to-end solution provider, offering mobile-friendly, easy-to-use APIs with pre-built pipelines under the hood. Several ML Kit features are actually powered by MediaPipe internally (e.g. pose detection and selfie segmentation).
There is no direct relationship between ARCore and ML Kit, though there may be shared or similar ML models between them, because both require ML models to power their features; the two products simply have different focuses.
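To make the "lower-level building block" point concrete, here is a minimal sketch using MediaPipe's legacy Python "solutions" API to pull raw face landmarks from a webcam, the kind of data a face-filter pipeline is built on top of. The camera index and printed landmark are illustrative choices, not the only way to do this.

```python
# Minimal sketch: raw face landmarks via MediaPipe's legacy Python API.
import cv2
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh

cap = cv2.VideoCapture(0)  # default webcam; index is illustrative
with mp_face_mesh.FaceMesh(max_num_faces=1) as face_mesh:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV captures BGR.
        results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_face_landmarks:
            # 468 3D landmarks per face -- anchor points for overlays.
            for face in results.multi_face_landmarks:
                tip = face.landmark[1]  # index 1 sits near the nose tip
                print(f"nose at ({tip.x:.2f}, {tip.y:.2f})")
cap.release()
```

ML Kit exposes similar face landmarks on mobile with far less setup, which is why it is usually the faster path to a production face filter; MediaPipe is the path when you need to customize the pipeline itself.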

API availability to track objects other than human gestures with the Windows Kinect

The APIs shipped with the MS Windows Kinect SDK are all about programming around voice, movement, and gesture recognition related to humans.
Are there any open source or commercial APIs for tracking and recognizing dynamically moving objects, such as vehicles, for classification?
Is it feasible, and a good approach, to employ the Kinect for automated vehicle classification rather than traditional image processing approaches?
Even though image processing technologies have made remarkable innovations, why is fully automated vehicle classification not used at most toll collection points?
Why are existing technologies (except the RFID approach) failing to classify vehicles reliably (i.e., they are not yet 100% accurate), or are there other reasons apart from image processing?
You will need to use a regular image processing suite to track objects that are not supported by the Kinect API. A few options:
OpenCV
Emgu CV (OpenCV in .NET)
ImageMagick
To my knowledge, there is no library that directly supports the depth capabilities of the Kinect, so using the Kinect instead of a regular camera would be of no benefit.
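For instance, a classic OpenCV approach to picking out moving vehicles from a fixed camera is background subtraction plus contour filtering. The sketch below assumes OpenCV 4's Python bindings; "traffic.mp4" and the thresholds are placeholders that would need tuning per installation. From here, classification would mean comparing blob sizes or feeding the cropped regions to a classifier.

```python
# Hedged sketch: background subtraction (OpenCV 4) to find moving vehicles.
import cv2

cap = cv2.VideoCapture("traffic.mp4")  # placeholder video path
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=50)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Remove speckle noise from the foreground mask before blob detection.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 1500:  # ignore blobs too small to be vehicles
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("vehicles", frame)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```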

An API to do game AI research in shooters

I'm looking to do a machine-learning-related course project. I'm basically looking for a framework for a top-view 2D shooter game to which I can apply machine learning algorithms.
There is a framework for doing research in car racing, called TORCS, and I was looking for something similar, but for shooters.
Basically, I would like a high-level API to make the bot move, shoot, pick up weapons, etc.
Some of the work that could be done:
Let's say you need to model how your bot will fight during combat. You could use a neural network to map the enemy's position, the bot's position, the bot's ammo, etc. to how the bot should move and which weapon it should choose.
Is there any (preferably 2D, Python) framework which will help me do this?
Robocode, in Java or .NET.
Marvin's Arena, in C#, VB, or C++.
Brood War API, for StarCraft.
Pogamut 3 / GameBots2004, for Unreal Tournament and the like.
Planet Wars / Galcon Clone AI: 2D, but not a shooter.
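As a sketch of the state-to-action mapping described in the question, here is a tiny, hypothetical feed-forward network in plain NumPy that scores a handful of discrete actions from a hand-built feature vector. The action names, features, and random (untrained) weights are all illustrative; in a real project the weights would be learned, e.g. by reinforcement learning or neuro-evolution against one of the frameworks above.

```python
# Hypothetical sketch: tiny network mapping combat state -> discrete action.
import numpy as np

ACTIONS = ["move_left", "move_right", "advance", "retreat", "shoot"]

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(6, 16))            # 6 features -> 16 hidden
W2 = rng.normal(scale=0.1, size=(16, len(ACTIONS)))  # hidden -> action scores

def choose_action(enemy_dx, enemy_dy, bot_x, bot_y, ammo, health):
    x = np.array([enemy_dx, enemy_dy, bot_x, bot_y, ammo, health], dtype=float)
    h = np.tanh(x @ W1)   # hidden layer
    scores = h @ W2       # one score per discrete action
    return ACTIONS[int(np.argmax(scores))]

# Example: enemy ahead and slightly right, plenty of ammo.
print(choose_action(0.4, 0.1, 0.5, 0.5, ammo=0.9, health=0.8))
```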

What is an augmented reality mobile application?

I've heard the term "augmented reality" used before, but what does it mean?
In particular, what is an augmented reality iPhone application?
From: http://en.wikipedia.org/wiki/Augmented_reality
Augmented reality (AR) is a term for a live direct or indirect view of a physical, real-world environment whose elements are augmented by virtual computer-generated sensory input, such as sound or graphics. It is related to a more general concept called mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented) by a computer. As a result, the technology functions by enhancing one’s current perception of reality.
In the case of augmented reality, the augmentation is conventionally in real time and in semantic context with environmental elements, such as sports scores on TV during a match. With the help of advanced AR technology (e.g. adding computer vision and object recognition), the information about the user's surrounding real world becomes interactive and digitally usable. Artificial information about the environment and the objects in it can be stored and retrieved as an information layer on top of the real-world view. The term augmented reality is believed to have been coined in 1990 by Thomas Caudell, an employee of Boeing at the time.
Incidentally, there are some images at the above URL that should make what's being discussed above fairly evident.
An augmented reality application is software that overlays (augments) data or visuals on the live view from your camera.
Popular examples include Snapchat filters, Yelp Monocle, and various map applications.
"Augmented reality (AR) is a live direct or indirect view of a physical, real-world environment whose elements are "augmented" by computer-generated or extracted real-world sensory input such as sound, video, graphics or GPS data. It is related to a more general concept called computer-mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented) by a computer. Augmented reality enhances one’s current perception of reality, whereas in contrast, virtual reality replaces the real world with a simulated one.1 Augmentation techniques are typically performed in real time and in semantic context with environmental elements, such as overlaying supplemental information like scores over a live video feed of a sporting event." source: wikipedia.org

Microsoft Robotics Studio, simple simulation

I am soon to start with Microsoft Robotics Studio.
My question to all the MSRS gurus: can simple simulations (such as obstacle avoidance and wall following) be done without any hardware?
Does MSRS have 3-dimensional as well as 2-dimensional rendering? As of now I do not have any hardware and am only interested in simulation; when I have the robot hardware I may try to interface it.
Sorry for a silly question; I am an MSRS noob, but I have previous robotics hardware and software experience.
Other than MSRS and the Player Project (Player/Stage/Gazebo), is there any other software that simulates robots effectively?
MSRS tackles several key areas. One of them is simulation. The 3D engine is based on the AGEIA physics engine and can simulate not only your robot and its sensors but also a somewhat complex environment.
The demo I saw had a Pioneer with a SICK lidar running around a cluttered apartment living room, with tables, chairs, and so on.
The idea is that your code doesn't even need to know if it's running on the simulator or the real robot.
Edit:
A few links as requested:
Start here: http://msdn.microsoft.com/en-us/library/dd939184.aspx
(screenshot: http://i.msdn.microsoft.com/Dd939184.image001(en-us,MSDN.10).jpg)
Then go here: http://msdn.microsoft.com/en-us/library/dd939190.aspx
(screenshot: http://i.msdn.microsoft.com/Dd939190.image008(en-us,MSDN.10).jpg)
Then take a look at some more samples: http://msdn.microsoft.com/en-us/library/cc998497.aspx
(screenshot: http://i.msdn.microsoft.com/Cc998496.Sumo1(en-us,MSDN.10).jpg)
The simple answer is yes. The MRDS simulator and Player/Stage have very similar capabilities. MRDS uses a video-game-quality physics engine under the hood, so you can do collisions and some basic physics on your robots, but it's not going to match the accuracy of a MATLAB simulation (on the flip side, it's real-time and easier to develop with). You can do a lot in MRDS without any hardware.
MRDS uses some pretty advanced programming abstractions, so it can be a bit intimidating at first, but do the tutorials and the course that has been posted to CodePlex, "Software Engineering for Robotics," and you will be fine: http://swrobotics.codeplex.com/
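MRDS services themselves are written against .NET, so the following is not MRDS code; it is just a language-neutral toy (here in Python, with made-up names) showing what "simulation without hardware" boils down to: a simulated world, a simulated range sensor, and a control loop that never touches a real robot.

```python
# Hypothetical toy, not MRDS: a point robot with one forward range sensor
# doing naive obstacle avoidance in a 2D grid. '#' cells are walls.
GRID = [
    "##########",
    "#........#",
    "#..##....#",
    "#........#",
    "##########",
]
DIRS = [(0, -1), (1, 0), (0, 1), (-1, 0)]  # N, E, S, W as (dx, dy)

def range_sensor(x, y, d):
    """Distance (in cells) to the nearest obstacle straight ahead."""
    dx, dy = DIRS[d]
    dist = 0
    while GRID[y + dy][x + dx] != "#":
        x, y, dist = x + dx, y + dy, dist + 1
    return dist

x, y, heading = 1, 1, 1  # start in a corner, facing east
for step in range(20):
    if range_sensor(x, y, heading) == 0:
        heading = (heading + 1) % 4   # obstacle ahead: turn right
    else:
        dx, dy = DIRS[heading]
        x, y = x + dx, y + dy         # clear: drive forward one cell
    print(f"step {step:2d}: pos=({x},{y}) heading={heading}")
```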

Resources