Fingerprint matching in mobile devices (coding in OpenCV, deployment in Android) - opencv

I intend to do my main project in the image-processing domain. The project's aim is to provide the user with fingerprint security on mobile devices. It involves:
Reading the fingerprint from user through mobile phone.
Matching the fingerprint image obtained against those available in the database.
I am interested in coding my project in OpenCV and deploying it on Android. I need clarification on a few things:
Is this project feasible?
Is OpenCV apt for this project? (considered Matlab, but it doesn't have portability to Android)
Eclipse or Visual Studio: which will be more suitable, considering the deployment on Android?
I am a beginner and need to learn OpenCV, so please guide me on how to start my project (what are the tools, reference books, IDEs, and SDKs to use?)

Yes, for sure, but not easy.
OpenCV. Lots of stuff: http://opencv.org/
Eclipse, taking into account that you will be deploying it on Android.
Good luck!

You can easily work with OpenCV; it's not hard.
Learning the Android portion is, to me, the challenging part. For the finger recognition itself, you could capture an image from your front camera, or from the rear camera as well (assuming you hold a finger up to the lens while holding the phone).
Using this feature to unlock the phone is, to me, the tough part; it involves mostly Android work.
To start with, I would suggest you build an app with a finger recognition algorithm. It should recognize your finger and perhaps take an action, like displaying your name.
If you can do that, the rest is all Android, and how you play with Android to get this to work.
I hope this helps and gives you a high-level answer.
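To make that suggestion concrete: the matching step can be prototyped before touching any Android code. Below is a toy, pure-Python sketch of minutiae-style matching; every coordinate and tolerance is invented for illustration, and a real system would extract minutiae or keypoint descriptors with OpenCV rather than hard-coding them.

```python
import math

def match_score(minutiae_a, minutiae_b, dist_tol=10.0, angle_tol=0.3):
    """Fraction of minutiae in A that have a close counterpart in B.

    Each minutia is an (x, y, angle) tuple; dist_tol is in pixels,
    angle_tol in radians. Greedy one-to-one matching for simplicity.
    """
    matched = 0
    used = set()
    for (xa, ya, ta) in minutiae_a:
        for i, (xb, yb, tb) in enumerate(minutiae_b):
            if i in used:
                continue
            close = math.hypot(xa - xb, ya - yb) <= dist_tol
            aligned = abs(ta - tb) <= angle_tol
            if close and aligned:
                matched += 1
                used.add(i)
                break
    return matched / len(minutiae_a) if minutiae_a else 0.0

# Two nearly identical prints should score high; unrelated ones low.
print_a = [(10, 10, 0.1), (40, 52, 1.2), (80, 33, 2.0)]
print_b = [(12, 11, 0.15), (41, 50, 1.25), (79, 35, 2.05)]
print_c = [(200, 200, 3.0)]
print(match_score(print_a, print_b))  # 1.0
print(match_score(print_a, print_c))  # 0.0
```

Real fingerprint matchers also have to handle rotation and translation between captures, which this toy deliberately ignores.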

A-Frame: FOSS Options for widely supported, markerless AR?

A-Frame's immersive-ar functionality will work on some Android devices I've tested with, but I haven't had success with iOS.
It is possible to use an A-Frame scene for markerless AR on iOS using a commercial external library. Example: this demo from Zapworks using their A-Frame SDK. https://zappar-xr.github.io/aframe-example-instant-tracking-3d-model/
The tracking seems to be nowhere near as good as A-Frame's hit-test demo (https://github.com/stspanho/aframe-hit-test), but it does seem to work on virtually any device and browser I've tried, and it is good enough for the intended purpose.
I would be more than happy to fall back to a lower-quality AR mode in order to have AR at all on devices that don't support immersive-ar in the browser. I have not been able to find an A-Frame-compatible solution using only free/open-source components for doing this, only commercial products like Zapworks and 8th Wall.
Is there a free / open source plugin for A-Frame that allows a scene to be rendered with markerless AR across a very broad range of devices, similar to Zapworks?
I ended up rolling my own solution, which wasn't complete but was good enough for the project. Strictly speaking, there are three problems to overcome to get a markerless AR experience on mobile without relying on WebXR:
Webcam display
Orientation
Position
Webcam display is fairly trivial to implement in HTML5 without any libraries.
Orientation is already handled nicely by A-Frame's "magic window" functionality, including on iOS.
Position was tricky and I wasn't able to solve it. I attempted to use the FULLTILT library's accelerometer functions, and even using the readings with gravity filtered out I wasn't able to get a high enough level of accuracy. (It happened that this particular project did not need it)
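For what it's worth, the position problem is fundamental rather than a limitation of FULLTILT: any constant bias left in the accelerometer reading grows quadratically in position once you double-integrate it. A toy simulation (all numbers invented) shows the drift:

```python
# Toy illustration: double-integrating accelerometer samples with a
# small constant bias b produces a position error of roughly b*t^2/2.
dt = 0.01          # 100 Hz sample rate
bias = 0.05        # m/s^2, a tiny uncorrected calibration error
velocity = 0.0
position = 0.0
errors = []
for step in range(1, 1001):      # 10 seconds of samples
    accel = 0.0 + bias           # true acceleration is zero
    velocity += accel * dt       # first integration: velocity
    position += velocity * dt    # second integration: position
    if step % 100 == 0:
        errors.append(position)

print(errors[0])   # after 1 s: ~0.025 m of error
print(errors[-1])  # after 10 s: ~2.5 m, about 100x worse
```

So even a well-filtered signal drifts by meters within seconds, which matches the accuracy problem described above.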

Fix to get my wishes?

I have a problem when I drive 🚘: I forget to slow down on highways where there are speed cameras 🎥, so I thought about making an iOS app to fix this. I explained my idea in an image, but I can't convert it to code. The steps:
1- 100 meters before a speed camera 🎥, the app alerts me: "There is a speed camera, please slow down."
2- Please just post code; I have basic knowledge of programming in Swift.
I'm not sure if I got you right, so please correct me if I'm wrong.
You want an iOS app that tells you if there is a speed camera on the road you're driving, right?
So you have some possibilities to achieve that:
you can have a look at the App Store; there are lots of such apps (e.g. TomTom) (easiest way)
if you want to build your own app you can make a use of the navigation sdk provided by mapbox: https://www.mapbox.com/help/ios-navigation-sdk/ (some programming skills needed)
Build your own app from scratch (much work and advanced programming skills)
If you want to build your app by mapbox or on your own you'll need the GPS-locations of speed cameras like provided here: https://www.scdb.info/
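The core of the "100 meters before a camera" alert is just a distance check between your GPS fix and each camera in that database. A sketch of the check (in Python for brevity; the camera coordinates below are made up — in Swift, CoreLocation's CLLocation.distance(from:) computes the same great-circle distance for you):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS coordinates."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def should_alert(car, camera, radius_m=100.0):
    """True when the car is within radius_m of a known camera."""
    return haversine_m(car[0], car[1], camera[0], camera[1]) <= radius_m

camera = (52.5200, 13.4050)          # made-up camera location
near = (52.5206, 13.4050)            # ~67 m away
far = (52.5300, 13.4050)             # ~1.1 km away
print(should_alert(near, camera))    # True
print(should_alert(far, camera))     # False
```

In a real app you would run this check against every nearby camera each time the GPS position updates.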

Unity3D - OCR Number Recognition

Our initial use case called for writing an application in Unity3D (write solely in C# and deploy to both iOS and Android simultaneously) that allowed a mobile phone user to hold their camera up to the title of a magazine article, use OCR to read the title, and then we would process that title on the backend to get related stories. Vuforia was far and away the best for this use case because of its fast native character recognition.
After the initial application was demoed a bit, more potential uses came up. Any use case that needed solely A-z characters recognized was easy in Vuforia, but the second it called for number recognition we had to look elsewhere because Vuforia does not support number recognition (now or anywhere in the near future).
Attempted Workarounds:
Google Cloud Vision - works great, but not native, and camera images are sometimes quite large, so not nearly as fast as we require. We even thought about using the OpenCV Unity asset to identify the numbers and then send multiple much smaller API calls, but that is still not native and adds an extra step.
Following instructions from SO to use a .Net wrapper for Tesseract - would probably work great, but after building and trying to bring the external dlls into Unity I receive this error .Net Assembly Not Found (most likely an issue with the version of .Net the dlls were compiled in).
Install Tesseract from source on a server and then create our own API - honestly unclear why we tried this when Google's works so well and is actively maintained.
Has anyone run into this same problem in Unity and ultimately found a good solution?
Vuforia by itself doesn't provide any way to detect numbers, just letters. To solve this problem I followed this strategy (only for numbers near a known image target):
Recognize the image.
Capture a Screenshot just after the target image is recognized (this screenshot must contain the numbers).
Send the Screenshot to an OCR web-service and get the response.
Extract the numbers from the response.
Use these numbers to do whatever you need and show AR info.
This approach solves the problem, but it doesn't work like a charm. Its success depends on the quality of the screenshot and the OCR service.
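Step 4 above (extracting the numbers from the OCR response) can be as simple as a regular expression over the returned text. A sketch in Python (the response string is invented; in Unity you would do the same with C#'s System.Text.RegularExpressions):

```python
import re

def extract_numbers(ocr_text):
    """Pull integer and decimal tokens out of a raw OCR response string."""
    return [float(tok) if "." in tok else int(tok)
            for tok in re.findall(r"\d+(?:\.\d+)?", ocr_text)]

# A made-up OCR response; real services return JSON you would parse first.
response_text = "Issue 42\nPrice: 3.99\nPage 108"
print(extract_numbers(response_text))  # [42, 3.99, 108]
```

OCR engines often confuse digits with letters (O/0, l/1), so in practice you may also want to normalize those characters before matching.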

Augmented Reality Mask using Facial Recognition on Xbox Kinect with Kinect for Windows SDK

I am using an Xbox Kinect with the Kinect for Windows SDK. I want to make an application that will augment a 3D mask (a 3D model of a mask made in 3DS Max) onto the face of anyone using the application. The application will be used in a local exhibit. I have not tried much because I don't know where to start. So what I want to know is: is it currently possible to augment a 3DS Max model onto a live video stream using the facial recognition and skeletal tracking features in the newest Kinect for Windows SDK, and if so, how/where should I start trying to implement this? Any pointer in the right direction would be great. Thank you! PS: Yes, I have read the UI guidelines and the face-tracking documentation. My problem is not knowing where to start programming, not a failure to understand the fundamental concepts. Thanks!
If you are serious about getting into developing for the Kinect I would recommend getting this book:
http://www.amazon.com/Programming-Kinect-Windows-Software-Development/dp/0735666814
This goes through developing with the Kinect for Windows SDK from the ground up. There is a face tracking and an augmented reality example so I'm pretty sure you will be able to achieve your goal quite easily.
All the code from the book is here:
http://kinecttoolbox.codeplex.com/
Alternatively, there is an example here which pretty much is what you want to achieve:
http://www.codeproject.com/Articles/213034/Kinect-Getting-Started-Become-The-Incredible-Hulk
It is developed using the beta version of the SDK, but the same principles apply.
You can also check out the quick start videos here:
http://channel9.msdn.com/Series/KinectQuickstart
In summary, based on my own experience, I would spend some time going through the beginner examples, either in the videos or the book (I found the book very good), just to get familiar with how to set up a simple Kinect project and how the different parts of the SDK work.
When you have developed some throwaway apps with the Kinect, I would then try tackling your project (although the Incredible Hulk project above should get you most of the way there!)
Best of luck with your project

Face Tracking and Virtual Reality

I'm searching for a face tracking system to use in an augmented reality project. I'm trying to find an open source and multi-platform application for it. The goal is to return the direction where the face is looking to interact with the virtual environment, (something like this video).
I've downloaded the sources of Johnny Lee's application above and tried Free Track too, making my own headset (some kind of monster, hehe). But it's not great to be limited to infrared points on your head.
Recently I downloaded FaceTrackNoIR, but when I launch the program I get "No DLL was found in the Waterfall procedure.", which I'm still trying to solve.
Anyone knows a good application, library, code, lecture, anything that could help me to find a good path for this?
Thank you all!
I'll try to post results someday :-)
I would take a look at OpenCV. It is a general purpose machine-learning and computer vision C++ library. One of the examples in the download is a real-time face tracker that connects to a video camera connected to your computer and draws squares around any faces in the camera view.
