How to detect a human body in Unity3D - OpenCV

I want to detect the whole human body in Unity3D. Is there any way to do that? I think there is an easy way to do it in OpenCV, but I'm pretty new to Unity and I don't know how to use OpenCV in Unity3D. And can I use OpenCV in Unity3D?

I don't see an easier way to do that if you are new (even if you are a pro, I think you would still use OpenCV).
You can use OpenCV in Unity; there is an asset on the Asset Store, it should be easy to integrate, and with any luck it will come with a ready example for detecting a human body. Sorry to say, but the asset is paid.
Of course, you can always integrate OpenCV into Unity with your own solution :)

I don't know exactly what you mean by "I want to detect the whole human body in Unity3D", but you can certainly use OpenCV in Unity. You need to write a wrapper for it, and fortunately there is a paid package available on the Asset Store:
OpenCV for Unity by Enox Software
If you have time and want to avoid an expensive plugin, you can write your own integration for Unity using these:
Using OpenCV with unity blog
OpenCV + Unity3D integration
OpenCV (EmguCV) integration in unity

Related

CardIO conflicts with OpenCV framework

I thoroughly enjoy using Card.IO, but in order for me to use it, it would have to be decoupled from its OpenCV .a files and instead link against the OpenCV framework. Most people have moved on from OpenCV 2 to OpenCV 3, and this library is stuck in the past. There appears to be no way to work around this, since the dependency is baked into the .a file. (Calling the creators of Card.IO!)
Has anyone else been able to work around this? Or is this library unusable now if you use OpenCV?
Thanks,
Kevin

iOS Plugin Unity

I am currently working on an augmented reality project. I would like to place some virtual objects on a human body. To that end I created an iOS face-tracking app (with OpenCV, in C++) which I want to use as a plugin for Unity. Is there a way to build a framework from an existing iOS app? Or do I have to create a new Xcode project, create a Cocoa Touch framework, and copy-paste the code from the app into that framework? I am a little bit confused here. Will the framework have camera access?
My idea was to track the position of a face and send that position to Unity, so that I can place some objects on it. But I do not know how to do that. Can anybody help?
Kind regards.
As far as I know, you need to build your Unity project and use assets like OpenCV for Unity, but that doesn't let you track the human body (without markers).
As for building a framework starting from an existing iOS app, that's the first time I've heard of that!

Augmented Reality + Realtime Image Recognition on iPhone

I am not sure if this question has been asked before or not, but I want to know what frameworks do I need to explore in order to do augmented reality with image recognition for iOS.
Basically to build something like this, http://www.youtube.com/watch?v=GbplSdh0lGU
I am using Wikitude's SDK, which also lets me use it with PhoneGap. Wikitude uses Vuforia's SDK for image recognition. Compare Wikitude and Vuforia for their features!
Here are two good SDKs that I know of for augmented reality. You will find tutorials and demos there.
Note: integrating the Vuforia SDK into your app is difficult, so you could go with the metaio SDK instead; however, modification (changing the target image) is easier in Vuforia.
http://www.metaio.com/sdk/
https://developer.vuforia.com/resources/sample-apps

Can OpenCV code developed in C/C++ run on iOS?

I am developing an image-processing application on CentOS with OpenCV, coding in C/C++. My intention is to have a single development codebase for Linux and iOS (iPad).
So if I start development in a Linux environment with OpenCV installed (in C/C++), can I use the same code on iOS without switching to Objective-C? I don't want to duplicate the effort for iOS and Linux, so how can I achieve this?
It looks like it's possible. Compiling and running C/C++ on iOS is no problem, but you'll need some Objective-C for the UI. If you pay some attention to the layering/abstraction of your modules, you should be able to share most or all of the core code between the platforms.
See my detailed answer to this question:
iOS:Retrieve rectangle shaped image from the background image
Basically you can keep most of your C++ code portable between platforms if you keep your user-interface code separate. On iOS all of the UI should be pure Objective-C, while your OpenCV image processing can be pure C++ (which would be exactly the same on Linux). On iOS you would write a thin Objective-C++ wrapper class that mediates between the Objective-C side and the C++ side. All it really does is translate image formats between them and pass data in and out of C++ for processing.
I have a couple of simple examples on GitHub you might want to take a look at: OpenCVSquares and OpenCVStitch. These are based on C++ samples distributed with OpenCV; you should compare the C++ in those projects with the original samples to see how much alteration was required (hint: not much).

Implementing ASIFT in Android

I am new to both OpenCV and Android. I have to detect objects in my project, so I have decided to use ASIFT. However, the code they have provided here is very lengthy and contains lots of C files. It also doesn't have OpenCV support.
Some searching on SO itself suggested that it is easier to connect the ASIFT code to the OpenCV library, but I can't figure out how to do that. Can anyone help me with a link, or with the steps I should follow to add ASIFT to my OpenCV library, which I can then use in my Android application?
Also, I would like to know whether using the Android NDK along with JNI to call the C files, or using the Android SDK with a binary package, would be the more suitable option for my Android project (object detection)?
Finally, I solved my problem using the source code from the ASIFT developers' website. I compiled all the source files together into my own library using make, then called the required functions from that library via JNI.
It works, but execution takes approximately 2 minutes on an Android device. Does anyone have ideas for reducing the running time?
They used very simple and slow brute-force matching (just as a proof of concept). You can use the FLANN library instead and it will help a lot: http://docs.opencv.org/doc/tutorials/features2d/feature_flann_matcher/feature_flann_matcher.html
