Facial recognition in iOS

I have to build an app that recognizes a person in a photo and finds out whether this person is stored in the phone book. I have face detection working, and I know how to get the photos of people stored in the phone book, but I am stuck on recognizing whether the person is the same.
I want to ask what would be the easiest way to do it. I saw that iOS 10 ships with facial recognition in the Photos app; is there any API to use facial recognition in iOS 10, or should I use OpenCV?

Face detection can be performed natively, as you've already found out, but to identify faces you'll have to use something like OpenCV, as there's presently no public API for this.
There's some information about how OpenCV can be used to recognise faces here.
Information about how to use it with Xcode here.

I built something similar some years ago.
I would suggest you look into perceptual hashing as it's an easy and inexpensive way of matching images.
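As a concrete illustration of perceptual hashing, here is a minimal average-hash (aHash) sketch in dependency-free Python. It assumes the face crop has already been downscaled to an 8x8 grid of 0-255 grayscale values; in practice you'd use an imaging library for the resize, or a ready-made library such as ImageHash.

```python
def average_hash(gray_8x8):
    """Return a 64-bit hash as a list of bits: 1 where a pixel is
    brighter than the image mean, 0 otherwise."""
    pixels = [p for row in gray_8x8 for p in row]
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming_distance(h1, h2):
    """Number of differing bits; small distances mean similar images."""
    return sum(a != b for a, b in zip(h1, h2))
```

Because each bit compares a pixel against the image's own mean, the hash is unaffected by uniform brightness shifts; a Hamming distance of roughly 10 or fewer out of 64 bits is commonly treated as a likely match, though the cutoff is something you'd tune yourself.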

You can have a look at the documentation, but this way your app will only work on iOS 10 and later.
If you plan to support older iOS versions or also build an Android app, go for OpenCV.

Related

Mixed reality in Android device using Unity(w/ OpenCV?)

I'm a fourth-year student currently doing my thesis project (noob in programming). What I want to ask is whether it is possible to create a mobile app in Unity with the following specifications:
- Mixed reality
- Hand tracking
- AI (image comparison)
I've done some research, but all I've seen covers only AR. If it is possible, what course of action should I take?
Yes, it's possible.
I suggest you use:
AR stuff:
Vuforia, EasyAR, or ARCore/ARKit (depending on your mobile target).
Hand tracking:
One of the best SDKs I have tried for hand tracking with AR is ManoMotion (https://www.manomotion.com/)
Image comparison:
As you suggest, OpenCV is a good choice; there are some "bridges" for using OpenCV on the Unity Asset Store.
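For intuition on what "image comparison" can mean at its simplest, here is a dependency-free sketch of grayscale histogram intersection, one of the basic similarity measures the OpenCV bridges expose (function names here are illustrative, not part of any SDK):

```python
def grayscale_histogram(pixels, bins=16):
    """Normalized histogram of a flat list of 0-255 grayscale values."""
    counts = [0] * bins
    for p in pixels:
        counts[min(p * bins // 256, bins - 1)] += 1
    return [c / len(pixels) for c in counts]

def histogram_intersection(h1, h2):
    """Similarity score: 1.0 for identical distributions, 0.0 for disjoint ones."""
    return sum(min(a, b) for a, b in zip(h1, h2))
```

Histogram intersection ignores spatial layout, so it is only a coarse first filter; feature-based methods (e.g. ORB keypoints in OpenCV) are the usual next step.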

Effort for building an Augmented Reality SDK with OpenCV

Our company is planning to start building some AR apps for Android and iOS. As the first step we need to decide whether to use an open-source SDK like ARToolKit, go for a commercial product like Vuforia, Wikitude, CraftAR, KudanAR, etc., or start writing our own AR SDK based on libraries like OpenCV/OpenGL.
I have read many articles and comparisons of the different SDKs available, and I have a good idea of what each of them can do and how much they cost, e.g.:
http://socialcompare.com/en/comparison/augmented-reality-sdks
https://www.linkedin.com/pulse/dozens-more-augmented-reality-sdks-than-you-think-here-offermann
In the past we have used Vuforia and we have it at the top of our list. But the main issue is the pricing.
So I would like to know whether any of you have written or tried to build your own AR SDK based on OpenCV, and what kind of effort it would take to support features like image and 3D-object tracking and augmenting 2D and 3D objects, across both iOS and Android devices.
This Augmented Reality SDK with OpenCV has some basic guidelines on how to start.
Mainly what I would like to know is: if a software engineer with 5+ years of good programming skills tried to do this, how much effort would it be? Would it be one month of work, or six months, or would it be difficult to get close to what the Vuforia SDK can do even with 12 months of work?

Augmented Reality Mask using Facial Recognition on Xbox Kinect with Kinect for Windows SDK

I am using an Xbox Kinect with the Kinect for Windows SDK. I want to make an application that will augment a 3D mask (a 3D model of a mask made in 3ds Max) onto the face of anyone using the application. The application will be used in a local exhibit. I have not tried much because I don't know where to start. So what I want to know is: is it currently possible to augment a 3ds Max model onto a live video stream using the facial recognition and skeletal tracking features in the newest Kinect for Windows SDK, and if so, how and where should I start trying to implement this? Any pointer in the right direction would be great. Thank you!
PS: Yes, I have read the UI guidelines and the facial documentation. My problem is not knowing where to start programming, not a failure to understand the fundamental concepts. Thanks!
If you are serious about getting into developing for the Kinect I would recommend getting this book:
http://www.amazon.com/Programming-Kinect-Windows-Software-Development/dp/0735666814
This goes through developing with the Kinect for Windows SDK from the ground up. There is a face tracking and an augmented reality example so I'm pretty sure you will be able to achieve your goal quite easily.
All the code from the book is here:
http://kinecttoolbox.codeplex.com/
Alternatively, there is an example here which pretty much is what you want to achieve:
http://www.codeproject.com/Articles/213034/Kinect-Getting-Started-Become-The-Incredible-Hulk
It is developed using the beta version of the SDK, but the same principles apply.
You can also check out the quick start videos here:
http://channel9.msdn.com/Series/KinectQuickstart
In summary, based on my own experience, I would spend some time going through the beginner examples, either in the videos or the book (I found the book very good), just to get familiar with how to set up a simple Kinect project and how the different parts of the SDK work.
When you have developed some throwaway apps with the Kinect, I would then try tackling your project (although the Incredible Hulk project above should get you most of the way there!).
Best of luck with your project.

Fingerprint matching in mobile devices (coding in OpenCV, deployment in Android)

I am intending to do my main project in the image-processing domain. My project's aim is to provide users with fingerprint security on mobile devices. It involves:
Reading the fingerprint from the user through the mobile phone.
Matching the obtained fingerprint image with those available in the database.
I am interested in coding my project in OpenCV and deploying it on Android. I need clarification on a few things:
Is this project feasible?
Is OpenCV apt for this project? (I considered MATLAB, but it can't be ported to Android.)
Eclipse or Visual Studio: which will be more suitable (considering the deployment on Android)?
I am a beginner and need to learn OpenCV, so please guide me on how to start my project (what tools, reference books, IDEs, and SDKs should be used?).
Yes, for sure, but it won't be easy.
OpenCV. There's lots of material at http://opencv.org/
Eclipse, taking into account that you will be deploying on Android.
Good luck!
You can easily work with OpenCV. It's not hard.
Learning the Android portion is, to me, the challenging part. For finger recognition, you could capture an image from your front camera, or use the back camera as well (assuming you put a finger on the camera lens while holding the phone).
Using this feature to unlock your phone is, to me, a tough job. It involves more Android work.
To start with, I would suggest you build an app with a finger-recognition algorithm. It should recognize your finger and maybe take an action, like displaying your name.
If you can do this, the rest is all Android, and how you work with Android to get it working.
I hope this helps and gives you a very high-level answer.

iOS: take multiple pictures without user interaction

I have been researching this and have read different opinions, but I wanted to ask you some more specific questions.
In my application I want to take 3 or 4 frames from the camera stream and process them without making the user press a button multiple times (and as fast as possible). I already do this in the Android version, because Android provides a callback method that delivers each frame of the camera feed.
I have seen some people using AVFoundation (the AVCaptureDevice and AVCaptureInput classes) to perform this task, but as far as I know this is only supported from iOS 4.0 onwards.
Is there another way to do this that supports older iOS versions, like 3.x?
How fast can the different pictures be taken?
Are there still problems using this framework to get apps/updates accepted on the App Store?
Thanks a lot,
Alex.
You should use the new way (AVCaptureInput), as only a few percent of users are still on iOS 3. iOS adoption is much faster than Android adoption: early last winter about 90% had already upgraded to iOS 4, and at this point even 4.0 is likely in the small minority as well.
One pre-iOS-4 way to do it was to open a UIImagePickerController and take screenshots. Depending on the exact version targeted, there are sometimes ways to disable the camera overlays.
See this question: iPhone: Get camera preview
