train classifier to detect only eyelashes/nose features dlib and opencv?

I want to know how I can train a cascade classifier to detect only eyelash or nose feature points in dlib and OpenCV (http://opencv.org/).
To be more clear, I just want to extract some particular feature points to a text file.
I tried extracting features, but to no avail: it gives all 68 points.

For the dlib Python API, the starting point should be this sample: http://dlib.net/face_landmark_detection.py.html
As you can see, it does face detection and shape prediction:
dets = detector(img, 1)
...
shape = predictor(img, d)
The shape object contains the face shape as a list of feature point coordinates, its parts. Each part is one point; for example, shape.part(30) is the tip of the nose. You can see their numbers on the sample pictures from this blog.
As I understand it, you simply need to save these points into a file, which can be done like this:
with open("sample_file.txt", "w") as f:
    for i in range(30, 32):
        f.write("{};{}\n".format(i, shape.part(i)))
where 30 and 31 are the part numbers written to the file (note that range(30, 32) stops before 32; use range(30, 33) to include it).
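Putting that together, here is a minimal self-contained sketch (face.jpg is a placeholder; the predictor file can be downloaded from the dlib site, and dlib.load_rgb_image requires a recent dlib, older samples load images via scikit-image). In the 68-point layout, the nose is covered by parts 27-35:
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = dlib.load_rgb_image("face.jpg")

with open("sample_file.txt", "w") as f:
    for d in detector(img, 1):
        shape = predictor(img, d)
        # nose bridge: parts 27-30; lower nose: parts 31-35
        for i in range(27, 36):
            p = shape.part(i)
            f.write("{};{};{}\n".format(i, p.x, p.y))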

Related

what's dataset type in tensorflow object-detection api?

I am trying to do my own object detection using my own dataset. I started my first machine learning program from the Google TensorFlow Object Detection API; the link is here: eager_few_shot_od_training_tf2_colab.ipynb
In the Colab tutorial, the author uses JavaScript to label the images; the result looks like this:
gt_boxes = [
    np.array([[0.436, 0.591, 0.629, 0.712]], dtype=np.float32),
    np.array([[0.539, 0.583, 0.73, 0.71]], dtype=np.float32),
    np.array([[0.464, 0.414, 0.626, 0.548]], dtype=np.float32),
    np.array([[0.313, 0.308, 0.648, 0.526]], dtype=np.float32),
    np.array([[0.256, 0.444, 0.484, 0.629]], dtype=np.float32)
]
When I run my own program, I use labelImg in place of the JavaScript tool, but the dataset is not compatible.
Now I have two questions. First, what is the dataset type in the Colab tutorial: COCO, YOLO, VOC, or something else? Second, how do I convert between labelImg data and the Colab tutorial's data? My goal is to label data with labelImg and then substitute it into the Colab tutorial.
The "data type" are just ratio values based on the height and width of the image. So the coordinates are just ratio values for where to start and end the bounding box. Since each image is going to be preprocessed, that is, it's dimensions are changed when fed into the model (batch,height,width,channel) the bounding box coordinates must have the correct ratio as the image might change dimensions from it's original size.
Like for the example, the model expects images to be 640x640. So if you provide an image of 800x600 it has to be resized. Now if the model gave back the coordinates [100,100,150,150] for an 640x640, clearly that would not be the same for 800x600 images.
However, to get this data format you should use PascalVOC when using labelImg.
The typical way to do this is to create TFRecord files and decode them in your training script in order to create datasets. However, you are free to build the TensorFlow dataset with whatever method you like in order to train your model. For the box conversion itself, see the sketch below.
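Here is a minimal sketch of turning a labelImg PascalVOC .xml file into the normalized [ymin, xmin, ymax, xmax] arrays that the tutorial's gt_boxes use (the file names are placeholders and voc_to_gt_box is just an illustrative helper):
import xml.etree.ElementTree as ET
import numpy as np

def voc_to_gt_box(xml_path):
    # labelImg stores pixel coordinates; dividing by the image size
    # gives the ratio values the tutorial expects
    root = ET.parse(xml_path).getroot()
    w = float(root.find("size/width").text)
    h = float(root.find("size/height").text)
    boxes = []
    for obj in root.findall("object"):
        bb = obj.find("bndbox")
        xmin, ymin = float(bb.find("xmin").text), float(bb.find("ymin").text)
        xmax, ymax = float(bb.find("xmax").text), float(bb.find("ymax").text)
        # note the [ymin, xmin, ymax, xmax] ordering used by the TF OD API
        boxes.append([ymin / h, xmin / w, ymax / h, xmax / w])
    return np.array(boxes, dtype=np.float32)

gt_boxes = [voc_to_gt_box(p) for p in ["img1.xml", "img2.xml"]]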
Hope this answered your questions.

OpenCV LBP recognizer on MNIST digits - haarcascade?

I am trying to implement the OpenCV LBPHFaceRecognizer() and make it work for images of digits from the MNIST dataset. These images are 28 x 28 px.
But for this task I need a haarcascade.xml file which is able to recognize digits. In the OpenCV package I only find xml files suited for face recognition and Russian plate numbers.
Here is my code; I basically just need to replace the cascadePath = "haarcascade_frontalface_default.xml" with an appropriate xml for digits, but where do I get one?
All in all I want to test face recognition with numbers instead of faces. So an input image where a "1" is shown should make it possible to recognize all other "1"s in the dataset.
For this, you need to train a cascade. Here are two links explaining how to do this; a rough command-line sketch follows below.
1. The OpenCV documentation for opencv_traincascade, the OpenCV application for training a cascade (it generates the .xml file).
2. A useful tutorial on training a cascade with OpenCV. It explains what to do and gives some tricks for generating the input files.
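As a rough sketch of what that looks like on the command line (the file names and sample counts are placeholders; the linked tutorial explains how to build the positive and negative lists), note that the 28 x 28 window matches the MNIST image size:
opencv_createsamples -info positives.txt -vec digits.vec -num 1000 -w 28 -h 28
opencv_traincascade -data classifier -vec digits.vec -bg negatives.txt -numPos 900 -numNeg 1800 -numStages 10 -featureType LBP -w 28 -h 28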

facial expression classification in real time using SVM

I am currently working on a project where I have to extract the facial expression of a user (only one user at a time, from a webcam), such as sad or happy.
My method for classifying facial expressions is:
Use OpenCV to detect the face in the image
Use ASM and STASM to get the facial feature points
Now I'm trying to do the facial expression classification.
Is an SVM a good option? And if it is, how can I start with SVM:
How am I going to train the SVM for each emotion using these landmarks?
Yes, SVMs have repeatedly been shown to perform well on this task. There have been dozens (if not hundreds) of papers describing such procedures.
For example:
Simple paper
Longer paper
Poster about it
More complex example
Some basic sources on SVMs themselves can be found at http://www.support-vector-machines.org/ (book titles, software links, etc.)
And if you are just interested in using them rather than understanding them, you can get one of the basic libraries:
libsvm http://www.csie.ntu.edu.tw/~cjlin/libsvm/
svmlight http://svmlight.joachims.org/
If you are already using OpenCV, I suggest you use the built-in SVM implementation. Training/saving/loading in Python is as follows; C++ has a corresponding API that does the same in about the same amount of code. It also has train_auto to find the best parameters.
import numpy as np
import cv2

samples = np.array(np.random.random((4, 5)), dtype=np.float32)
labels = np.array(np.random.randint(0, 2, 4), dtype=np.float32)

svm = cv2.SVM()
svmparams = dict(kernel_type=cv2.SVM_LINEAR,
                 svm_type=cv2.SVM_C_SVC,
                 C=1)
svm.train(samples, labels, params=svmparams)

testresult = np.float32([svm.predict(s) for s in samples])

print samples
print labels
print testresult

svm.save('model.xml')
svm.load('model.xml')  # load() fills this SVM object in place; it does not return a new one
And the output:
#print samples
[[ 0.24686454 0.07454421 0.90043277 0.37529686 0.34437731]
[ 0.41088378 0.79261768 0.46119651 0.50203663 0.64999193]
[ 0.11879266 0.6869216 0.4808321 0.6477254 0.16334397]
[ 0.02145131 0.51843268 0.74307418 0.90667248 0.07163303]]
#print labels
[ 0. 1. 1. 0.]
#print testresult
[ 0. 1. 1. 0.]
So you provide the n flattened shape models as samples and n labels, and you are good to go. You probably don't even need the ASM part; just apply some filters which are sensitive to orientation, like Sobel or Gabor, concatenate the matrices, flatten them, and feed them directly to the SVM (see the sketch below). You can probably get maybe 70-90% accuracy.
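For instance, a minimal sketch of building such a filter-based feature vector with OpenCV (face.png and the 64 x 64 size are placeholders; Sobel responses stand in for whatever orientation-sensitive filter you pick):
import cv2
import numpy as np

img = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)
img = cv2.resize(img, (64, 64))

# orientation-sensitive gradient responses in x and y
gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)

# concatenate and flatten into one sample row for the SVM
sample = np.hstack([gx.flatten(), gy.flatten()]).astype(np.float32)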
As someone said, CNNs are an alternative to SVMs. Here are some links that implement LeNet-5. So far, I find SVMs much simpler to get started with.
https://github.com/lisa-lab/DeepLearningTutorials/
http://www.codeproject.com/Articles/16650/Neural-Network-for-Recognition-of-Handwritten-Digi
-edit-
Landmarks are just n (x, y) vectors, right? So why don't you try putting them into an array of size 2n and simply feed them directly to the code above?
For example, 3 training samples of 4 landmarks (0,0), (10,10), (50,50), (70,70):
samples = [[0, 0, 10, 10, 50, 50, 70, 70],
           [0, 0, 10, 10, 50, 50, 70, 70],
           [0, 0, 10, 10, 50, 50, 70, 70]]
labels = [0., 1., 2.]  # 0 = happy, 1 = angry, 2 = disgust
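One caveat: cv2.SVM is the OpenCV 2.x API. On OpenCV 3 and later the same flow lives under cv2.ml; a minimal sketch, assuming a modern OpenCV build:
import numpy as np
import cv2

samples = np.random.random((4, 5)).astype(np.float32)
labels = np.random.randint(0, 2, (4, 1)).astype(np.int32)  # cv2.ml expects integer class labels

svm = cv2.ml.SVM_create()
svm.setType(cv2.ml.SVM_C_SVC)
svm.setKernel(cv2.ml.SVM_LINEAR)
svm.setC(1)
svm.train(samples, cv2.ml.ROW_SAMPLE, labels)

_, results = svm.predict(samples)
print(results.ravel())

svm.save("model.xml")
svm = cv2.ml.SVM_load("model.xml")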
You could check this code to get an idea of how this could be done using an SVM.
You can find the algorithm explained here.

PCA in OpenCV and how to prepare data?

I just want to clarify something about PCA in OpenCV. Suppose I have two rows of data (A, B).
A 3 8 7
B 2 4 5
If I wanted to create a PCA model in OpenCV, what must I do to the data? Do I have to subtract the means (e.g. subtract the mean of A from its data points) or does the PCA function do this?
Someone said that OpenCV PCA expects the data to be normalised (between 0 and 1). If so, how do I normalise?
Hope someone can clarify this for me as PCA in OpenCV is very badly documented on the Net.
Cheers...
The data for PCA in OpenCV does not need to be normalized. But if you already have the mean (from some previous calculation), you can pass it to the PCACompute() function to speed things up.
OpenCV refman:
PCACompute(data[, mean[, eigenvectors[, maxComponents]]]) -> mean, eigenvectors
Parameters:
data – Input samples stored as the matrix rows or as the matrix columns.
mean – Optional mean value. If the matrix is empty (noArray()), the mean is computed from the data.
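In Python that call looks like the following minimal sketch, using the two rows from the question (passing an empty mean, as the OpenCV tutorials do, so it is computed and subtracted internally):
import numpy as np
import cv2

# rows A and B from the question, one sample per row
data = np.array([[3, 8, 7],
                 [2, 4, 5]], dtype=np.float32)

# an empty mean makes PCACompute calculate (and subtract) it from the data
mean = np.empty((0))
mean, eigenvectors = cv2.PCACompute(data, mean)
print(mean)          # per-column mean of the samples
print(eigenvectors)  # principal components, one per row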
There is a good article on data normalization on Wikipedia.
For complete documentation check out the opencv.pdf file that should be in the doc/ folder of your installation. In some versions it is named opencv2refman.pdf.
And also try to find the book "Learning OpenCV" by Gary Bradski; it explains this more than well.

OpenCV + HOG +SVM: help needed with SVM single feature vector

I am trying to implement a people detection system based on SVM and HOG using OpenCV 2.3, but I got stuck.
I came this far:
I can compute HOG values from an image database, and then I calculate the SVM support vectors with LIBSVM, so I get e.g. 1419 support vectors with 3780 values each.
OpenCV just wants one feature vector in the method hog.setSVMDetector(). Therefore I have to calculate one feature vector from the 1419 support vectors that LIBSVM has calculated.
I found one hint on how to calculate this single feature vector: link
“The detecting feature vector at component i (where i is in the range e.g. 0-3779) is built out of the sum of the support vectors at i, multiplied by the alpha value of that support vector, e.g. det[i] = sum_j (sv_j[i] * alpha[j]), where j is the number of the support vector and i is the number of the component of the support vector.”
According to this, my routine works this way:
I take the first element of my first support vector, multiply it by its alpha value, and add to it the first element of the second support vector multiplied by its alpha value, …
But after summing over all 1419 support vectors I get quite high values:
16.0657, -0.351117, 2.73681, 17.5677, -8.10134,
11.0206, -13.4837, -2.84614, 16.796, 15.0564,
8.19778, -0.7101, 5.25691, -9.53694, 23.9357,
If you compare them to the default vector in the OpenCV sample peopledetect.cpp (and hog.cpp in the OpenCV source):
0.05359386f, -0.14721455f, -0.05532170f, 0.05077307f,
0.11547081f, -0.04268804f, 0.04635834f, -0.05468199f, 0.08232084f,
0.10424068f, -0.02294518f, 0.01108519f, 0.01378693f, 0.11193510f,
0.01268418f, 0.08528346f, -0.06309239f, 0.13054633f, 0.08100729f,
-0.05209739f, -0.04315529f, 0.09341384f, 0.11035026f, -0.07596218f,
-0.05517511f, -0.04465296f, 0.02947334f, 0.04555536f,
you see that the default vector values stay within the bounds of -1 and +1, while my values far exceed them.
I think my single-feature-vector routine needs some adjustment. Any ideas?
Regards,
Christoph
The aggregated vector's values do look high.
I used the loadSVMfromModelFile() located in http://lnx.mangaitalia.net/trainer/main.cpp
I had to remove svinstr.sync(); from the code since it caused parts of the lines to be lost and gave wrong results.
I don't know much about the rest of the file; I only used this function.
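For reference, the summation from the quoted hint is straightforward in NumPy once the support vectors, alphas, and rho have been parsed out of the LIBSVM model file (the parsing is left out here, and build_hog_detector is just an illustrative helper):
import numpy as np

def build_hog_detector(support_vectors, alphas, rho):
    # support_vectors: (n_sv, 3780) array, alphas: (n_sv,) array, rho: bias,
    # all taken from the libsvm model; det[i] = sum_j(sv_j[i] * alpha[j])
    det = (alphas[:, np.newaxis] * support_vectors).sum(axis=0)
    # hog.setSVMDetector() expects the bias appended as the last element;
    # depending on the label ordering, the sign of rho may need flipping
    return np.append(det, -rho).astype(np.float32)

# usage: hog.setSVMDetector(build_hog_detector(svs, alphas, rho))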
