I have a set of images and YOLO annotation files (in .txt format) for validation.
How do I properly use the map command (something like the one below) to get the mAP score on the validation dataset using the darknet framework (repo: https://github.com/AlexeyAB/darknet)?
./darknet detector map cfg/test.data cfg/test_tiny.cfg backup/my_yolov3_tiny_final.weights
Will it be possible to derive a confusion matrix from this?
./darknet detector map cfg/test.data cfg/test_tiny.cfg backup/my_yolov3_tiny_final.weights, where test.data is the same data file that you were training on. You can also pass the -map flag while training, so after every 4 epochs (4 * train_size / batch iterations) you will see mAP@0.5.
I couldn't edit the code above. A more correct command:
./darknet detector map test.data yolov4-tiny.cfg yolov4-tiny_last.weights
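For reference, the -map flag mentioned above goes on the training command line; a sketch following the repo's README (the pre-trained weights file yolov3-tiny.conv.11 is an assumption and depends on your model):
./darknet detector train cfg/test.data cfg/test_tiny.cfg yolov3-tiny.conv.11 -map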
I trained a CNN model using Torch (Lua) and then loaded it in OpenCV Java. The model was structured to take a 112*112 input image. However, I accidentally fed 128*128 images to the model.
I expected an error, but the model just ran smoothly and produced some results. Why is that? Does OpenCV just ignore the surplus part of the input?
Below is a part of my code:
Mat bgrImage = bgrImages.get(i);
Mat inputBlob = Dnn.blobFromImage(bgrImage);
objectNet.setInput(inputBlob);
Mat fwdResultMat = objectNet.forward();
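One hedged guess you can test: blobFromImage called with only the image keeps the original 128*128 size, so the network may simply be running on the larger input. If you want to force the expected size, blobFromImage can resize for you; a minimal sketch in Python (the Java Dnn.blobFromImage overload takes the same parameters; the file names here are hypothetical):
import cv2

img = cv2.imread("sample.jpg")                      # hypothetical input image
# Passing an explicit size makes the blob match the expected 112x112 input
blob = cv2.dnn.blobFromImage(img, 1.0, (112, 112))
net = cv2.dnn.readNetFromTorch("model.t7")          # hypothetical Torch model file
net.setInput(blob)
out = net.forward()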
I have an image with 8 channels. I have a conventional algorithm where weights are applied to each of these channels to get an output of '0' or '1'. This works fine with several samples and complex scenarios. I would like to implement the same in machine learning using a CNN.
I am new to ML and started looking at tutorials, which seem to deal exclusively with image-processing problems: handwriting recognition, feature extraction, etc.
http://cv-tricks.com/tensorflow-tutorial/training-convolutional-neural-network-for-image-classification/
https://leonardoaraujosantos.gitbooks.io/artificial-inteligence/content/neural_networks.html
I have set up Keras with Theano as the backend. Basic Keras samples work without problems.
What steps do I need to follow to achieve the same result using a CNN? I do not understand the use of filters, kernels, and strides in my use case. How do we provide training data to Keras if the pixel channel values and outputs are in the form below?
Pixel#1 f(C1,C2...C8)=1
Pixel#2 f(C1,C2...C8)=1
Pixel#3 f(C1,C2...C8)=0
...
Pixel#N f(C1,C2...C8)=1
I think you should treat this the same way you would use a CNN to do semantic segmentation. For an example, look at
https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_fcn.pdf
You can use the same architecture as they are using, but for the first layer, instead of using filters for 3 channels, use filters for 8 channels; a minimal sketch of that change follows the links below.
For the loss function you can use the same loss function, or something more specific to a binary output.
There are several implementations for Keras with a TensorFlow backend:
https://github.com/JihongJu/keras-fcn
https://github.com/aurora95/Keras-FCN
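A minimal sketch of that first-layer change, assuming Keras (the layer widths, the image size, and the final 1x1 sigmoid convolution for the per-pixel 0/1 map are illustrative assumptions, not taken from the paper):
from keras.models import Sequential
from keras.layers import Conv2D

H, W = 128, 128  # illustrative image size, an assumption
model = Sequential()
# First layer: filters over 8 input channels instead of the usual 3 (RGB)
model.add(Conv2D(32, (3, 3), padding='same', activation='relu',
                 input_shape=(H, W, 8)))
model.add(Conv2D(32, (3, 3), padding='same', activation='relu'))
# A 1x1 convolution with a sigmoid gives a per-pixel 0/1 prediction map
model.add(Conv2D(1, (1, 1), activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')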
Since the input is in the form of channel values, in sequence at that, I would suggest you use Convolution1D. Here, you take each pixel's channel values as the input and predict an output for each pixel. Try this outline (a runnable sketch follows below):
Conv1D(filters, kernel_size, strides=1, padding='valid')
Conv1D()
MaxPooling1D(pool_size)
......
(add as many layers as you want)
......
Dense(1)
Use binary_crossentropy as the loss function.
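Putting that outline together, a minimal runnable sketch (the filter counts and kernel sizes are illustrative assumptions; each pixel becomes one training sample of shape (8, 1)):
from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Flatten, Dense

model = Sequential()
# Each sample is one pixel: a sequence of its 8 channel values
model.add(Conv1D(16, 3, strides=1, padding='valid', activation='relu',
                 input_shape=(8, 1)))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))  # the 0/1 output per pixel
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
# X has shape (num_pixels, 8, 1) and y has shape (num_pixels,):
# model.fit(X, y, epochs=10, batch_size=32)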
I am trying to implement the OpenCV LBPHFaceRecognizer() and make it work on images of digits from the MNIST dataset. These images are 28 x 28 px and look like this:
But for this task I need a haarcascade.xml file that is able to detect digits. In the OpenCV package I only find .xml files suited for face recognition and Russian plate numbers.
Here is my code; I basically just need to replace cascadePath = "haarcascade_frontalface_default.xml" with an appropriate .xml for digits, but where do I get one?
All in all, I want to test face recognition with numbers instead of faces. So an input image showing a "1" should make it possible to recognize all other "1"s in the dataset.
For this, you need to train a cascade. Here are two links that explain how to do it:
1. The OpenCV documentation for opencv_traincascade, which is the OpenCV application for training cascades (it generates the .xml file).
2. A useful tutorial on training a cascade with OpenCV. It explains what to do and gives some tricks for generating the input files.
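Once the cascade is trained, using it follows the usual OpenCV pattern; a minimal sketch, assuming training produced a file called digit_cascade.xml (the file names here are hypothetical):
import cv2

digit_cascade = cv2.CascadeClassifier("digit_cascade.xml")  # hypothetical trained cascade
img = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)
# detectMultiScale returns (x, y, w, h) boxes around detected objects
digits = digit_cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=3)
for (x, y, w, h) in digits:
    cv2.rectangle(img, (x, y), (x + w, y + h), 255, 1)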
I want to know how I can train a cascade classifier to detect only eyelash or nose feature points in dlib and OpenCV (http://opencv.org/).
To be more clear, I just want to extract some particular feature points to a text file.
I tried extracting features, but to no avail: it gives all 68 points.
For the dlib Python API, the starting point should be this sample: http://dlib.net/face_landmark_detection.py.html
As you can see, it has face detection and shape prediction:
dets = detector(img, 1)
...
shape = predictor(img, d)
The shape object contains the face shape as a list of feature point coordinates, its parts. Each part is one point; for example, shape.part(30) is the tip of the nose. You can see their numbers on the sample pictures from this blog.
As I understand it, you simply need to save these points to a file, which can be done like this:
with open("sample_file.txt", "w") as f:
    for i in range(30, 32):
        f.write("{};{}\n".format(i, shape.part(i)))
where range(30, 32) selects the part numbers you want to write to the file (30 and 31 here; note that Python's range excludes the end value).
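For completeness, a self-contained sketch of the whole pipeline based on the dlib sample above (the image and predictor file names are assumptions; parts 27-35 cover the nose in the 68-point scheme, if the nose is the feature you want):
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = dlib.load_rgb_image("face.jpg")  # hypothetical input image
dets = detector(img, 1)

with open("sample_file.txt", "w") as f:
    for d in dets:
        shape = predictor(img, d)
        for i in range(27, 36):  # nose points in the 68-point annotation
            p = shape.part(i)
            f.write("{};{};{}\n".format(i, p.x, p.y))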
I am using jlibsvm to do SVM for regression. My data set is very small (42 samples). When I use the dataset to create the model using epsilon-SVR with a sigmoid kernel, no support vectors are generated.
This is what I get in my model file:
svm_type epsilon_svr
kernel_type sigmoid
gamma 0.02380952425301075
coef0 0.0
label
rho -66.42803
total_sv 0
probA -1.0
SV
When I use some other data set from the libsvm website, I get a model file with support vectors just fine.
Can someone please suggest why no support vectors are being generated for my data set?
My data set file is formatted correctly, so there are no issues there.
This could mean that the best model found, given your data and the hyperparameters, is one that assigns the same output to all samples.
Are your samples unbalanced? What is the number of positive and negative samples? You might want to try adding a weighting to the positive/negative samples to account for that.
It could also be that the samples are hard to separate given their structure and the kernel type. Have you tried a different structure?
With only 42 data samples, maybe you could add them to your question and get better answers.
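One concrete way the "same output for all samples" case can happen in epsilon-SVR: if epsilon is larger than the spread of the target values, a constant prediction already fits every sample inside the epsilon-tube, and the model ends up with zero support vectors, as in the model file above. A quick illustration (sketched with scikit-learn's SVR rather than jlibsvm, so this is an assumption about the same underlying libsvm behaviour):
import numpy as np
from sklearn.svm import SVR

rng = np.random.RandomState(0)
X = rng.rand(42, 5)                    # 42 samples, like the question
y = 66.4 + 0.1 * rng.randn(42)         # targets vary far less than epsilon

model = SVR(kernel='sigmoid', epsilon=1.0).fit(X, y)
print(len(model.support_))             # 0: every sample sits inside the tube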