I am trying to build a face recognition system using a Siamese network. I am using the PubFig database, a large, real-world face dataset consisting of 58,797 images of 200 people collected from the internet. I want to build the system entirely from scratch with a Siamese network (I don't want to use transfer learning). Is this a good approach for face recognition from scratch?
See Papers With Code https://paperswithcode.com/task/face-recognition for the State of the Art on this task. A review of how other people have approached a particular task is the best starting point for developing a new solution.
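To make the from-scratch approach concrete: a Siamese network trains one shared embedding CNN on pairs of faces labeled same/different identity. Below is a minimal sketch in Keras with a contrastive loss; the architecture, input size, and margin are illustrative assumptions, not values tuned for PubFig:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_embedding(shape=(105, 105, 3)):
    # Shared CNN that maps a face crop to a fixed-length embedding vector.
    inp = layers.Input(shape=shape)
    x = layers.Conv2D(64, 5, activation="relu")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(128, 3, activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)
    return Model(inp, layers.Dense(128)(x))

def contrastive_loss(y_true, dist, margin=1.0):
    # y_true is 1 for same-identity pairs, 0 for different-identity pairs.
    y_true = tf.reshape(tf.cast(y_true, dist.dtype), tf.shape(dist))
    return tf.reduce_mean(y_true * tf.square(dist)
                          + (1.0 - y_true) * tf.square(tf.maximum(margin - dist, 0.0)))

embed = build_embedding()
a = layers.Input(shape=(105, 105, 3))
b = layers.Input(shape=(105, 105, 3))
# Euclidean distance between the two embeddings of a pair.
dist = layers.Lambda(lambda t: tf.sqrt(
    tf.reduce_sum(tf.square(t[0] - t[1]), axis=1, keepdims=True) + 1e-9))([embed(a), embed(b)])
siamese = Model([a, b], dist)
siamese.compile(optimizer="adam", loss=contrastive_loss)
# Train on same/different pairs sampled from PubFig:
# siamese.fit([pairs_a, pairs_b], pair_labels, epochs=..., batch_size=...)
```

At inference time you embed a query face once and compare its distance to stored embeddings against a threshold, so recognizing a new person only requires storing their embedding, not retraining.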
I am stuck with a project on detecting suspicious facial expressions along with detecting foreign objects (e.g. guns, metal rods, or anything similar). I don't know much about ML or image processing, and I need to complete the project as soon as possible. It would be helpful if anyone could point me in the right direction.
How can I manage a dataset?
Which type of code should I follow?
How do I present the final system?
I know it is a lot to ask but any amount of help is appreciated.
I have tried to train a model using transfer learning, following this tutorial on YouTube:
https://www.youtube.com/watch?v=avv9GQ3b6Qg
The tutorial uses MobileNet as the model and a known dataset with 7 classes (Angry, Disgust, Fear, Happy, Neutral, Sad, Surprised). I was able to train the model successfully and get detected faces classified into these 7 emotions.
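For reference, the setup from the tutorial looks roughly like this in Keras (the input size and the exact classification head are assumptions on my part):

```python
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import MobileNet

# Frozen MobileNet backbone with a new 7-way emotion classification head.
base = MobileNet(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False
x = layers.GlobalAveragePooling2D()(base.output)
out = layers.Dense(7, activation="softmax")(x)  # Angry ... Surprised
model = Model(base.input, out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```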
How do I further develop it to achieve what I want?
I'm trying to research face recognition, but not just "recognizing that there's a face in the video": I also want to recognize whose face it is, in Swift on iOS. So far, every resource I have found on the internet only covers detection, not true face recognition (which I suspect requires some kind of machine learning training, plus a database to store the training results for future recognition). For example, this tutorial uses the Vision framework, and this tutorial covers face features, but neither involves machine learning. This tutorial mentions a machine learning framework, OpenML, but gives no details whatsoever.
I did find a promising article about face recognition using Local Binary Patterns Histograms (LBPH), even though the recognition part is very short, but it didn't say anything about where the data model is stored, or whether I can send the "trained data" to a server to be merged with the training data already there. And then there is the claim that OpenCV is native C++ and can only be used from Objective-C++, not from Swift.
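In case it helps frame the question: the LBPH workflow from that article boils down to a few OpenCV calls (Python shown here for brevity; the data and filenames are placeholders). Notably, the trained model serializes to a plain YAML file, which I assume is what would get uploaded to a server:

```python
import cv2
import numpy as np

# Placeholder training data: in practice these are aligned grayscale
# face crops and integer identity labels gathered on the device.
faces = [np.random.randint(0, 255, (100, 100), dtype=np.uint8) for _ in range(4)]
labels = np.array([0, 0, 1, 1])

# LBPH lives in the opencv-contrib package (cv2.face), not core OpenCV.
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.train(faces, labels)

# The model is a plain YAML file, so it could be uploaded to a server and
# re-loaded on another device with recognizer.read("lbph_model.yml").
recognizer.write("lbph_model.yml")

pred_label, distance = recognizer.predict(faces[0])  # lower distance = closer match
```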
To have a centralized face recognition database (one device trains to recognize a face, uploads the result to the server, and another device can then use that information to recognize the same face), I suspect the training is done on the client side (iOS) but the recognition is done on the server side (the device detects a face, uploads a cropped image of that face to the server, and the server does facial recognition on that image). Is that correct? Or is it more practical to download all the server's training data to the device and then do face recognition on the client? Or are both the training and the recognition done on the server?
This is all only in my head, and I don't know where to start looking for my use case. I feel like the server should be the one that trains, stores the model, and does all the recognition, while the client only sends the detected face.
What you're talking about, if I understand correctly, is Face ID. Unfortunately, Apple only gives developers this feature as a means to authenticate the user, not to recognize a face per se. What you can do is take a picture of the face in the app, create a machine learning model with Apple's Core ML framework, and train the model to recognize faces. The problem is that you'd virtually have to train the model on every face, which is not feasible. If you're keen, you can write your own face recognition algorithm and analyze the captured picture with your own model. For this, you'd need a really large amount of data.
Edit: Explore the following links. Maybe they can help.
https://gorillalogic.com/blog/how-to-build-a-face-recognition-app-in-ios-using-coreml-and-turi-create-part-1/
https://blog.usejournal.com/humanizing-your-ios-application-with-face-detection-api-kairos-60f64d4b68f7
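If you follow the Turi Create route from the first link, the training and Core ML export side is only a few lines of Python (the folder layout and label column here are assumptions):

```python
import turicreate as tc

# Load face images from per-person folders, e.g. faces/alice/*.jpg, faces/bob/*.jpg,
# and derive each image's identity label from its parent folder name.
data = tc.image_analysis.load_images("faces/", with_path=True)
data["label"] = data["path"].apply(lambda p: p.split("/")[-2])

# Train an image classifier and export it as a Core ML model for the iOS app.
model = tc.image_classifier.create(data, target="label")
model.export_coreml("FaceRecognizer.mlmodel")
```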
Keep in mind, however, that these will not be secure like FaceID is. They can easily be spoofed.
My partner and I decided to implement a traffic light recognition program as a student project.
But we are absolute beginners in computer vision and have no idea how to start. (All we know is that we will use OpenCV.)
Should we first learn image recognition, or just start with object tracking?
Our goal is to recognize traffic lights in video, not just in a single image.
In my opinion, you should take a serious course on computer vision before going deeper.
A video is just a sequence of images, so you can use OpenCV to read each frame and then process it.
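For example, with OpenCV's Python bindings the per-frame loop is only a few lines (the video filename is a placeholder):

```python
import cv2

cap = cv2.VideoCapture("traffic.mp4")  # placeholder input video
while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of video
    # Run your per-image detector on each frame here, then draw the detections.
    cv2.imshow("frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```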
For your current project, simple object detection using HOG features should be more than enough.
There's a tutorial at http://www.hackevolve.com/create-your-own-object-detector/ . It's easy to understand and the source code is available, so you can move quickly.
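The core of that approach is extracting a HOG feature vector per window and feeding it to a classifier such as a linear SVM (positives = traffic lights, negatives = background). Roughly, with OpenCV (window and cell sizes here are the typical pedestrian-detector defaults, not values tuned for traffic lights):

```python
import cv2

# HOG descriptor over a fixed-size window: 9 orientation bins,
# 8x8 cells, 16x16 blocks with an 8-pixel block stride.
hog = cv2.HOGDescriptor((64, 128), (16, 16), (8, 8), (8, 8), 9)
window = cv2.resize(cv2.imread("light.jpg", cv2.IMREAD_GRAYSCALE), (64, 128))
features = hog.compute(window)  # 3780 values to feed a linear SVM
```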
Good luck.
I have a project in which I need to classify data coming from several sensors (time-series data, e.g. a gyroscope) into several classes. I have used several classifiers, including SVM, decision trees, neural networks, and KNN, in a batch scenario. My ultimate goal is a real-time classifier that is accurate, lightweight, and able to improve itself, so that I can implement it on my device, which has limited resources (CPU, RAM, ...). I was thinking of a semi-supervised classifier, since I can store a few labeled data points on my device and use future data points to improve the classifier. Does anyone have recommendations or experience in this regard?
Online learning is very challenging. I recommend you steer away from it for now and use batch learning. You can always update the model when you update the mobile app, or have the app check your server for a new model every x days.
Now, how do you run a machine learning algorithm efficiently on a phone with limited resources? First, identify which platform you are using; I'll assume you want a platform-agnostic answer. Most ML algorithms (except lazy-learning ones like KNN) can run efficiently on a smartphone; have a look at this benchmarking experiment.
You have several options here:
iOS: Here's a list of all machine learning libraries available publicly.
Android: Weka for Android, this lib has a huge number of ML algorithms.
Platform-agnostic deep learning: TensorFlow. You can export your models to TensorFlow Lite (tutorial) and deploy them on any mobile OS, or use Caffe2 to train deep learning models and export them to either smartphone OS. A minimal TensorFlow Lite export sketch follows below.
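As a rough sketch of the TensorFlow Lite route (the model filename is a placeholder):

```python
import tensorflow as tf

# Convert a trained Keras classifier to TensorFlow Lite for on-device inference.
model = tf.keras.models.load_model("classifier.h5")  # hypothetical saved model
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # quantize to shrink and speed up
with open("classifier.tflite", "wb") as f:
    f.write(converter.convert())
```

The .tflite file is then bundled with the app and run through the TensorFlow Lite interpreter on either mobile OS.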
I use VNImageRequestHandler and VNDetectRectanglesRequest to handle a request to find rectangles in an image. But Vision in iOS 11 only provides barcode, rectangle, and face detection, and I want to find cars in an image. How should I change the code to find a specific kind of object in an image?
If you’re looking for Apple to create an API named VNDetectCarRequest you should probably file a feature request. (And if it happens, I’m sure the “Apple is making a car!” rumor mill will start up again...)
For general-purpose image recognition, the path to take with Vision is to use VNCoreMLRequest and supply a machine learning model trained for the image recognition task you have in mind.
On the native programming side, all image recognition/classification tasks are the same — you can start by reusing Apple’s Classifying Images with Vision and Core ML sample code, which sets up VNCoreMLRequest and handles the VNClassificationObservation results it produces. The special sauce that changes a general “what is this” classifier into a “hotdog or not a hotdog” classifier or a “what kind of vehicle is this (if it’s one at all)” classifier is all in the model.
There might be a machine learning model that already does the task you’re looking for out there — if you find one, you can wrap it in a Core ML Model file using the scripts Apple provides.
Otherwise, you’ll need to look at one of the general purpose image classifier models out there (again, there are several already conveniently gathered on developer.apple.com) and work on specializing / retraining it to your more specific task. That part of your work is outside Apple’s API ecosystem, and there are many possible options. Web searches for “train caffe image model” or “train keras image model” or similar should be helpful there.
Once you’ve trained your model, use the Core ML tools to get it into Core ML to use with Vision.