Face Tracking and Virtual Reality

I'm searching for a face tracking system to use in an augmented reality project. I'm trying to find an open-source, multi-platform application for it. The goal is to get the direction the face is looking so I can interact with the virtual environment (something like this video).
I've downloaded the source of Johnny Lee's application mentioned above and tried FreeTrack too, making my own headset (some kind of monster, hehe). But being limited to infrared points on your head isn't good.
Recently I downloaded FaceTrackNoIR, but when I launch the program I get "No DLL was found in the Waterfall procedure.", which I'm still trying to solve.
Does anyone know of a good application, library, code, lecture, anything that could help me find a good path for this?
Thank you all!
I'll try to post results someday :-)

I would take a look at OpenCV. It is a general-purpose machine learning and computer vision C++ library. One of the examples in the download is a real-time face tracker that reads from a video camera attached to your computer and draws squares around any faces it sees.
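If it helps, here's the gist of that sample in Python rather than C++ (a minimal sketch; it assumes a pip-installed OpenCV exposing cv2.data.haarcascades, which may differ between versions):

    # Minimal OpenCV face-detection sketch: draws a rectangle around
    # each face seen by the first attached camera.
    import cv2

    # Haar cascade bundled with OpenCV; the path may vary per install.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture(0)  # first camera on the machine
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                         minNeighbors=5)
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("faces", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # quit on 'q'
            break
    cap.release()
    cv2.destroyAllWindows()

Note this only gives you face positions; to get the direction the face is looking you would still need a head-pose estimation step on top, e.g. fitting facial landmarks and feeding them to cv2.solvePnP.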

Related

How to detect an image in a newspaper and play a video relevant to it using augmented reality?

I plan to detect an image in a newspaper and play a video relevant to it. I have seen several newspaper-reading AR apps that include this feature, but I couldn't find out how it's done. How can I do it?
I don't expect any code, but I'd like to know the steps I should follow to do this. Thank you.
You need to browse through the available marker-based AR SDKs - such SDKs let you define in advance a database of images you would like to detect and respond to; once any of these images is detected at runtime, you get some kind of event with data about the detected image.
Vuforia is considered a good one and has good samples, so it should be easier to start with. You should also check out Kudan, and there are more.
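The SDKs hide the details, but the underlying mechanism is natural-feature matching between your reference images and each camera frame. Purely to illustrate that principle (this is not Vuforia's or Kudan's API, and newspaper_ad.png is a hypothetical reference image), a rough OpenCV sketch:

    # Illustration of natural-feature image detection, the idea behind
    # marker-based AR SDKs. Not any SDK's actual API.
    import cv2

    orb = cv2.ORB_create(nfeatures=1000)
    ref = cv2.imread("newspaper_ad.png", cv2.IMREAD_GRAYSCALE)
    ref_kp, ref_des = orb.detectAndCompute(ref, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        kp, des = orb.detectAndCompute(gray, None)
        if des is not None:
            matches = matcher.match(ref_des, des)
            good = [m for m in matches if m.distance < 40]
            if len(good) > 25:  # crude stand-in for the SDK's "detected" event
                print("reference image detected -> play the video here")
        cv2.imshow("camera", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()

A real SDK adds robust pose estimation and tracking on top of this, which is why using one is the sensible route.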

Can augmented reality be realized in a website?

I want to do some research on augmented reality technology. Specifically, I would like to match a 2D image to a 3D model, so that the 3D model appears when the 2D image is scanned. I know there are a lot of SDKs (like Metaio and Wikitude) and other software that can do this in a mobile app. However, what I want is to do this on a website, so that people don't need to download a particular mobile app but can just open a website and scan a picture.
So I'd like to know, as the title asks: can AR be realized in a website? If yes, how can I do it, or is there any software like Metaio Creator for this? If not, why?
Thanks to anyone who is willing to answer my naive question.
May I recommend our completely web-based AR & VR tool holobuilder.com by bitstars.com?
It supports 360-degree photospheres that can be enhanced with custom 3D models and then embedded directly into your website as an iframe; it has native support for a stereoscopic view mode and much more.
For your use case, you could have a look at the lower part of this blog post, where you will find information and an embedded example presentation with photosphere imagery containing 3D elements:
http://heyholo.com/google-pushes-vr-great-for-tools-like-holobuilder/
If you want to start creating, I recommend the beginner's guide:
https://medium.com/@maxspeicher/the-definite-guide-to-holobuilder-3b62a54d303e
The CV feature tracking you asked about cannot yet be realized without an app or browser plugin. What you can do, however, is render perspectively correct 3D elements into the camera image and move them using the device sensors. That should be as performant as in the player app.
We hope this helps push your research forward, and we would love to read your feedback. If you have any questions, please do not hesitate to ask, here or on any other contact channel!

Augmented Reality Mask using Facial Recognition on Xbox Kinect with Kinect for Windows SDK

I am using an Xbox Kinect with the Kinect for Windows SDK. I want to make an application that will augment a 3D mask (a 3D model of a mask made in 3DS Max) onto the face of anyone using the application. The application will be used in a local exhibit. I have not tried much because I don't know where to start.
So what I want to know is: is it currently possible to augment a 3DS Max model onto a live video stream using the facial recognition and skeletal tracking features in the newest Kinect for Windows SDK, and if so, how/where should I start trying to implement this? Any pointer in the right direction would be great. Thank you!
PS: Yes, I have read the UI guidelines and the face-tracking documentation. My problem is not knowing where to start programming, not a lack of understanding of the fundamental concepts. Thanks!
If you are serious about getting into Kinect development, I would recommend this book:
http://www.amazon.com/Programming-Kinect-Windows-Software-Development/dp/0735666814
It goes through developing with the Kinect for Windows SDK from the ground up, and there are face tracking and augmented reality examples, so I'm pretty sure you will be able to achieve your goal quite easily.
All the code from the book is here:
http://kinecttoolbox.codeplex.com/
Alternatively, there is an example here which is pretty much what you want to achieve:
http://www.codeproject.com/Articles/213034/Kinect-Getting-Started-Become-The-Incredible-Hulk
It was developed using the beta version of the SDK, but the same principles apply.
You can also check out the quick start videos here:
http://channel9.msdn.com/Series/KinectQuickstart
In summary, based on my own experience, I would spend some time going through the beginner examples, either in the videos or the book (I found the book very good), just to get familiar with how to set up a simple Kinect project and how the different parts of the SDK work.
Once you have built a few throwaway apps with the Kinect, I would then tackle your project (although the Incredible Hulk project above should get you most of the way there!).
Best of luck with your project.

Is there a virtual/dummy IMAQ camera for LabVIEW?

I'm writing LabVIEW software that grabs images from an IMAQ compatible GigE camera.
The problem: this is a collaborative project, so I only have intermittent access to the actual camera. I'd like to be able to keep developing this software even when the camera isn't present.
Is there a simple/fast way to create a virtual or dummy IMAQ camera in software? Ideally I'd like the dummy camera to grab frames from an AVI or a stack of JPEGs. Something like this must exist; I just can't find it on Google.
I'm looking for something that won't take very long (e.g. < 2 hours of effort) and that is abstracted away behind the standard LabVIEW IMAQ interface, so that my software won't know or care whether it's dealing with a dummy camera or an actual camera.
You can try this method using LabVIEW classes:
Hardware Emulation Using LabVIEW Classes
If you have the IMAQdx driver, you might consider just buying a cheap USB webcam for $10.
Alternatively, using the IMAQdx driver (assuming you have it), you can insert the Vision Acquisition Express VI and choose AVIs or even still pictures as a source.
Something like this: GigESim is camera emulation software. Unfortunately it is proprietary and too expensive (> $500) for my needs, but perhaps others will find the link useful.
Anyone know of a viable Open Source alternative?
There's an IP camera emulator project that emulates an IP camera in Python. I haven't used it myself, so I don't know whether IMAQ can use it.
Let us know if it works for you.
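For what it's worth, such an emulator is basically a small HTTP server pushing an MJPEG stream. A minimal Python sketch of the idea (not that project's code; frames.avi is a placeholder clip, and whether IMAQ will accept the stream is untested):

    # Toy "IP camera": loops a prerecorded AVI as an MJPEG HTTP stream.
    import cv2
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class MJPEGHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type",
                             "multipart/x-mixed-replace; boundary=frame")
            self.end_headers()
            cap = cv2.VideoCapture("frames.avi")  # placeholder file
            while True:
                ok, frame = cap.read()
                if not ok:  # end of clip: rewind and keep streaming
                    cap.set(cv2.CAP_PROP_POS_FRAMES, 0)
                    continue
                ok, jpg = cv2.imencode(".jpg", frame)
                self.wfile.write(b"--frame\r\n")
                self.send_header("Content-Type", "image/jpeg")
                self.send_header("Content-Length", str(len(jpg)))
                self.end_headers()
                self.wfile.write(jpg.tobytes() + b"\r\n")

    HTTPServer(("0.0.0.0", 8080), MJPEGHandler).serve_forever()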
I know this question is really old, but hopefully this answer helps someone out.
IMAQdx also works with Windows DirectShow devices. While normally these are actual physical capture devices (think USB webcams), they don't have to be.
There are a few pre-made options available on the web. I found OBS Studio together with this VirtualCam plugin easy enough to use. Basically:
Download and install both.
Load your media sources in the sources list.
Enable the VirtualCam stream (Tools > VirtualCam). Press Start.

Automated Webcam Application / Hardware Problems

I am starting to develop an automated webcam application. The goal is to automatically take pictures, do some image processing and then upload the results to an FTP site. All of these tasks seem simple.
However, I am having a hard time finding a decent camera. I don't want to use a simple webcam or HD webcam because the still-frame image quality isn't very good.
I'm also having a hard time finding an affordable digital camera that supports USB snapshot or control.
My second concern is the development itself. I'm not quite sure which programming language to use. I have experience with AS3, Processing, Java and some simple C++ and OpenCV.
Do you have a clue?
Regarding the camera: there are pretty good webcams around, some with HD quality. Look at the Logitech cameras (I tested their API and it is quite good); an HD camera retails at $99, which is very cheap. If you are looking for something better, I would go with Nikon, as they also have a pretty good API for C#/C++. You can get a basic SLR with a simple 28mm lens for $500. Don't use a PowerShot, as Canon stopped supporting their API. Whatever camera you decide to buy, make sure a proper API is available, is being maintained, and is free.
Regarding development, I would go with C#/Java as they are easier than C++. There are quite a lot of image processing libraries for C#/Java; just make sure the camera comes with an API that fits your chosen language.
Good luck.
Generally (from experience), most USB cameras that show up as an imaging device in Windows can be used with JAI [Java Advanced Imaging]. Additionally, on the .NET/C++ side, the same cameras can be used through DirectShow as a capture device. Java/C# will make development easier, but expect to lose some performance, even with the best optimizations. Also, you can only perform up to the speed of the camera and of the data line running from the camera to the computer (USB 1.0 will seriously limit a decent frame rate).
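Language aside, the pipeline the question describes is small. A rough Python/OpenCV sketch of grab -> process -> FTP upload (ftp.example.com and the credentials are placeholders, and the grayscale conversion stands in for whatever processing you actually need):

    # Sketch of the capture -> process -> upload loop with OpenCV + ftplib.
    import time
    import cv2
    from ftplib import FTP

    cap = cv2.VideoCapture(0)  # first capture device (DirectShow on Windows)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("could not grab a frame")

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # stand-in processing
    name = "shot_%d.jpg" % int(time.time())
    cv2.imwrite(name, gray)

    ftp = FTP("ftp.example.com")   # placeholder host
    ftp.login("user", "password")  # placeholder credentials
    with open(name, "rb") as f:
        ftp.storbinary("STOR " + name, f)
    ftp.quit()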
First, get the image into RAM:
If you are using CHDK, I suggest copying the image from camera memory to RAM using the scripting languages CHDK supports; you can get help with this on the CHDK forum: http://chdk.setepontos.com/index.php
Or, if that's difficult, you can continuously copy the images to the hard disk and load them into RAM from there. (You need to take care of, i.e. delete, the mass of images that accumulates on the hard disk in a short period of time!)
This sounds like a 'brute force' approach, but it will get your work going while you research the correct approach.
Then perform the image processing:
Once the image is in RAM, you can apply your image processing algorithms as usual, e.g. using the OpenCV library; see the sketch below.
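A minimal sketch of that brute-force disk loop in Python (C:/camera_dump is a placeholder for wherever the camera deposits its shots, and the Canny call stands in for your real processing):

    # Poll a dump folder, process each new JPEG, then delete it so
    # images don't pile up on disk (the caveat mentioned above).
    import glob
    import os
    import time
    import cv2

    while True:
        for path in sorted(glob.glob("C:/camera_dump/*.jpg")):
            img = cv2.imread(path)
            if img is None:  # file may still be mid-write; retry next pass
                continue
            edges = cv2.Canny(img, 100, 200)  # stand-in processing step
            cv2.imwrite(path.replace(".jpg", "_edges.png"), edges)
            os.remove(path)
        time.sleep(0.5)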
Hope this helps you.
