OpenCV GPU FaceDetector Example crashing on random frames

I'm trying out the Haar-cascade-based face detection using the GPU module in OpenCV 2.3.1.
My code compiles, and sometimes it shows the initial frame with one or more ROI rectangles drawn onto the output frame to highlight detected objects.
But after the second or third call of this detector method it just crashes with SIGABRT. Any suggestions?
Here's the code:
cv::Mat ProcessorWidget::detectGPU(Mat &img) {
    // The classifier is (re)loaded from a file chosen via a dialog on every call;
    // image_cpu is a cv::Mat member of ProcessorWidget.
    cv::gpu::CascadeClassifier_GPU cascade_gpu(QFileDialog::getOpenFileName(this).toStdString());

    img.copyTo(image_cpu);
    gpu::GpuMat image_gpu(image_cpu);
    gpu::GpuMat objbuf;
    int detections_number = cascade_gpu.detectMultiScale(image_gpu, objbuf, 1.2);

    // download only the detected number of rectangles
    Mat obj_host;
    objbuf.colRange(0, detections_number).download(obj_host);

    Rect* faces = obj_host.ptr<Rect>();
    for (int i = 0; i < detections_number; ++i)
        cv::rectangle(image_cpu, faces[i], Scalar(255));

    return image_cpu;
}
Another point is that some of the Haar cascade classifiers shipped with OpenCV always crash my application when I use them, while some other classifiers always work on the first frame and then crash a few frames later.
BTW, I initialize the classifier from within this method just for testing purposes. Initializing it only once when constructing the ProcessorObject didn't help either ...
Could the classifier XMLs be incompatible somehow?
Thanks in advance!

Directly from the docs:
Only the old haar classifier (trained by the haar training application) and NVIDIA’s nvbin are supported.
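In practice that means the XML you hand to CascadeClassifier_GPU must be one of the old-format cascades, and it is also worth loading the classifier once, checking that the load succeeded, and reusing it for every frame instead of reconstructing it (and opening a file dialog) on each call. A minimal sketch, assuming haarcascade_frontalface_alt.xml is an old-format cascade and gray is a CV_8UC1 frame (both are illustrative):
#include <iostream>
#include <opencv2/gpu/gpu.hpp>

// Load once (e.g. in the ProcessorWidget constructor) and reuse it for every frame.
cv::gpu::CascadeClassifier_GPU cascade_gpu;
if (!cascade_gpu.load("haarcascade_frontalface_alt.xml"))      // old-format cascade assumed
    std::cerr << "Cascade failed to load (new-format XMLs are rejected by the GPU module)" << std::endl;

// Per frame: guard against an empty classifier before calling detectMultiScale.
if (!cascade_gpu.empty()) {
    cv::gpu::GpuMat image_gpu(gray), objbuf;                   // gray: CV_8UC1 input frame
    int detections_number = cascade_gpu.detectMultiScale(image_gpu, objbuf, 1.2);
    // download and draw the rectangles as in the question
}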

Related

OpenCV Java - dnn::net forward resolution mismatch

I trained a CNN model using Torch (Lua) and then loaded it in OpenCV Java. The model was structured to take a 112*112 input image. However, I accidentally fed 128*128 images to the model.
I expected an error, but the model just ran smoothly and produced some results. Why is that? Does OpenCV just ignore the surplus parts of the input?
Below is a part of my code:
Mat bgrImage = bgrImages.get(i);
Mat inputBlob = Dnn.blobFromImage(bgrImage);
objectNet.setInput(inputBlob);
Mat fwdResultMat = objectNet.forward();

BruteForceMatcher with FREAK extractor gives zero matches

Hello guys, hope you are doing well. I am implementing a system that can detect an object in a given image frame using OpenCV 2.4.8. Currently I am using the FREAK algorithm because it is free. As mentioned in the tutorials and OpenCV docs, I created FastFeatureDetector and FREAK objects:
FastFeatureDetector detector(30);
FREAK extractor;
From here on the code is mostly similar to the OpenCV example http://docs.opencv.org/doc/tutorials/features2d/feature_flann_matcher/feature_flann_matcher.html
Instead of FLANN I used brute force:
BruteForceMatcher<Hamming> matcher;
For both the object and the real-time image frame I find keypoints and descriptors:
detector.detect(frame,keypoints_frame);
descriptors_frame.convertTo(descriptors_frame,CV_32F);
extractor.compute(frame, keypoints_frame, descriptors_frame);
Then I match the descriptors using match:
matcher.match( descriptors_object, descriptors_frame, matches);
When I check the size of matches (which is defined as std::vector<DMatch> matches;), it is zero even for an image frame that contains the object to be detected, so I can't perform findHomography (the code runs fine up to the matching step).
But when I run drawMatches it still draws points on the detected object in the given frame. When I run the same algorithm with SURF or BRISK, they give a match size > 0 and then I can perform findHomography and proceed.
Can you please tell me why I am getting zero matches with FREAK?
What can I do to avoid that and perform findHomography?
The code works well with SURF and BRISK (they give some false results too, but I can deal with those).
Thanks in advance!!
Note: I hope my question is clear. Please let me know if it isn't and I will edit it as needed.
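For reference, here is a minimal sketch of the order I would expect the calls in (OpenCV 2.4 API; objectImg and frame stand in for your reference image and the current video frame). FREAK produces binary CV_8U descriptors, so they should be matched with a Hamming norm (cv::BFMatcher with NORM_HAMMING is equivalent to BruteForceMatcher<Hamming>) and not converted to CV_32F first:
#include <opencv2/features2d/features2d.hpp>

cv::FastFeatureDetector detector(30);
cv::FREAK extractor;

std::vector<cv::KeyPoint> keypoints_object, keypoints_frame;
cv::Mat descriptors_object, descriptors_frame;

detector.detect(objectImg, keypoints_object);
extractor.compute(objectImg, keypoints_object, descriptors_object);   // binary CV_8U rows
detector.detect(frame, keypoints_frame);
extractor.compute(frame, keypoints_frame, descriptors_frame);

cv::BFMatcher matcher(cv::NORM_HAMMING);                              // Hamming distance for binary descriptors
std::vector<cv::DMatch> matches;
if (!descriptors_object.empty() && !descriptors_frame.empty())
    matcher.match(descriptors_object, descriptors_frame, matches);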

Extract point descriptors from small images using OpenCV

I am trying to extract different point descriptors (SIFT, SURF, ORB, BRIEF, ...) to build a Bag of Visual Words. The problem seems to be that I am using very small images: 12x60 px.
Using a dense extractor I am able to get some keypoints, but then when extracting the descriptor no data is extracted.
Here is the code:
vector<KeyPoint> points;
Mat descriptor; // descriptor of the current image
Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("BRIEF");
Ptr<FeatureDetector> detector(new DenseFeatureDetector(1.f,1,0.1f,6,0,true,false));
image = imread(filename, 0);
roi = Mat(image,Rect(0,0,12,60));
detector->detect(roi,points);
extractor->compute(roi,points,descriptor);
cout << descriptor << endl;
The result is [] (with BRIEF and ORB) and a segfault (with SURF and SIFT).
Does anyone have a clue on how to densely extract point descriptors from small images in OpenCV?
Thanks for your help.
Indeed, I finally managed to work my way to a solution. Thanks for the help.
I am now using an ORB extractor with explicitly initialised parameters instead of a default-constructed one, e.g.:
// ORB(nfeatures, scaleFactor, nlevels, edgeThreshold, firstLevel, WTA_K, scoreType, patchSize):
// orbSize sets both the edge threshold and the patch size
Ptr<DescriptorExtractor> extractor(new ORB(500, 1.2f, 8, orbSize, 0, 2, ORB::HARRIS_SCORE, orbSize));
I had to explore the documentation of OpenCV thoroughly before finding the answer to my problem: Orb documentation.
Also, if people are using the dense point extractor, they should be aware that after descriptor computation they may have fewer keypoints than the keypoint detector produced: the descriptor computation removes any keypoint for which it cannot get the data.
BRIEF and ORB use a 32x32 patch to get the descriptor. Since that doesn't fit in your image, they remove those keypoints (to avoid returning keypoints without a descriptor).
In the case of SURF and SIFT, they can use smaller patches, but it depends on the scale provided by the keypoint. In this case, I guess they have to use a bigger patch and the same as before happens. I don't know why you get a segfault, though; maybe the SIFT/SURF descriptor extractors don't check that keypoints are inside the image boundaries, as BRIEF/ORB ones do.
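Putting that together, a minimal sketch of the ORB-on-a-small-ROI approach described above (the value of orbSize is illustrative; it just has to be small enough to fit a 12x60 patch):
int orbSize = 7;   // illustrative: patch/edge size that still fits a 12x60 image
cv::Ptr<cv::FeatureDetector> detector(new cv::DenseFeatureDetector(1.f, 1, 0.1f, 6, 0, true, false));
cv::Ptr<cv::DescriptorExtractor> extractor(
        new cv::ORB(500, 1.2f, 8, orbSize, 0, 2, cv::ORB::HARRIS_SCORE, orbSize));

cv::Mat image = cv::imread(filename, 0);
cv::Mat roi(image, cv::Rect(0, 0, 12, 60));

std::vector<cv::KeyPoint> points;
cv::Mat descriptor;
detector->detect(roi, points);
extractor->compute(roi, points, descriptor);   // keypoints whose patch falls outside the ROI are dropped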

How to utilize cascade classifier in custom detection algorithm?

So, I've created my own HoG feature extractor and a simple sliding-window algorithm, whose pseudocode looks like this:
for (int i = 0; i < img.rows; i++) {
    for (int j = 0; j < img.cols; j++) {
        extract the image ROI at the current position
        calculate features for the ROI
        feed the features into svm.predict() to determine whether it's human or not
    }
}
However, since it's very slow (especially when you include different scales), I've decided to train some cascade classifiers using the opencv_traincascade command on my positive and negative samples.
opencv_traincascade provides me with cascade.xml, params.xml, and a number of stage XML files.
My question is: how do I use these trained cascade classifiers in my detection loop?
Edit: No, not plain detectMultiScale. The reason I'm using a cascade classifier is just to speed up rejection of non-object regions; I still need to use my own algorithm to calculate the score of each probable ROI. Sorry for the confusion.
Following the OpenCV doc on Object Detection, you have to create the cascade detector object and ::load the cascade you want to apply (the XML file you have generated). ::detectMultiScale is then used to fill a std::vector with the objects detected in the current frame, by sliding windows of different scales and sizes over it and merging close, high-confidence samples.
Code here!
Same question answered here?
In order to use the cascade classifier you have to call the CascadeClassifier::load function (passing the path of the generated XML file as an argument). Then, whenever you want to check whether the object of interest is present, call CascadeClassifier::detectMultiScale on the frame.
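A minimal sketch of that combination, using the trained cascade only to propose candidate windows and then rescoring each candidate with your own features and SVM (grayImg, computeMyHoG, the trained svm and the minimum window size are placeholders for your existing code):
#include <opencv2/objdetect/objdetect.hpp>

cv::CascadeClassifier cascade;
cascade.load("cascade.xml");                                   // produced by opencv_traincascade

std::vector<cv::Rect> candidates;
cascade.detectMultiScale(grayImg, candidates, 1.1, 3, 0, cv::Size(32, 64));   // cheap pre-filter

std::vector<cv::Rect> detections;
for (size_t i = 0; i < candidates.size(); ++i) {
    cv::Mat roi = grayImg(candidates[i]);
    cv::Mat features = computeMyHoG(roi);                      // placeholder: your own HoG extractor (1 x N, CV_32F)
    if (svm.predict(features) > 0)                             // placeholder: your own CvSVM, trained with +1/-1 labels
        detections.push_back(candidates[i]);                   // keep candidates[i] as a final detection
}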

Extracting HoG Features using OpenCV

I am trying to extract features using OpenCV's HoG API, however I can't seem to find the API that allows me to do that.
What I am trying to do is to extract features using HoG from all my dataset (a set number of positive and negative images), then train my own SVM.
I peeked into hog.cpp under OpenCV, and it didn't help. The code is buried in complexity and the need to cater for different hardware (e.g. Intel's IPP).
My question is:
Is there any OpenCV API that I can use to extract all those features/descriptors to be fed into an SVM? If there is, how can I use it to train my own SVM?
If there isn't, are there any existing libraries out there that could accomplish the same thing?
So far I am actually porting an existing library (http://hogprocessing.altervista.org/) from Processing (Java) to C++, but it's still very slow, with detection taking at least around 16 seconds.
Has anyone else successfully extracted HoG features? How did you go about it? And do you have any open-source code I could use?
Thanks in advance
You can use the HOGDescriptor class in OpenCV as follows:
HOGDescriptor hog;
vector<float> ders;
vector<Point> locs;
This function computes the HOG features for you:
hog.compute(grayImg, ders, Size(32, 32), Size(0, 0), locs);
The HOG features computed for grayImg are stored in the ders vector. To turn them into a matrix that can be used later for training:
Mat Hogfeat(ders.size(), 1, CV_32FC1);
for (int i = 0; i < ders.size(); i++)
    Hogfeat.at<float>(i, 0) = ders.at(i);
Now your HOG features are stored in Hogfeat matrix.
You can also set the block size, cell size and block stride on the hog object (before calling compute) as follows:
hog.blockSize = 16;
hog.cellSize = 4;
hog.blockStride = 8;
// This compares the HOG features of two images without using any SVM
// (not an efficient way, but useful when you want to compare only a few images).
// Simple distance:
// Suppose you have two HOG feature vectors, Hogfeat1 and Hogfeat2, of the same size.
double distance = 0;
for (int i = 0; i < Hogfeat1.rows; i++)
    distance += std::abs(Hogfeat1.at<float>(i, 0) - Hogfeat2.at<float>(i, 0));
if (distance < Threshold)
    cout << "The two images are of the same class" << endl;
else
    cout << "The two images are of different classes" << endl;
Hope it is useful :)
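To address the training part of the question: the usual layout for an SVM is one feature row per image. Below is a minimal sketch using the OpenCV 2.x CvSVM API; trainImages, trainLabels and the 64x128 window size are placeholders for your own dataset, not anything from the answer above:
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/ml/ml.hpp>

// Placeholders: fill with your 64x128 grayscale samples and their +1/-1 labels.
std::vector<cv::Mat> trainImages;
std::vector<int> trainLabels;

cv::HOGDescriptor hog(cv::Size(64, 128), cv::Size(16, 16), cv::Size(8, 8), cv::Size(8, 8), 9);

cv::Mat trainData;                                          // one 1 x N feature row per sample
for (size_t i = 0; i < trainImages.size(); ++i) {
    std::vector<float> ders;
    hog.compute(trainImages[i], ders, cv::Size(8, 8), cv::Size(0, 0));
    trainData.push_back(cv::Mat(ders).reshape(1, 1));       // flatten into a single CV_32F row
}

cv::Mat labels(trainLabels, true);                          // N x 1 label column (CV_32S)

CvSVM svm;
CvSVMParams params;
params.svm_type = CvSVM::C_SVC;
params.kernel_type = CvSVM::LINEAR;
svm.train(trainData, labels, cv::Mat(), cv::Mat(), params);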
I also wrote a program comparing the HOG features of two images with the help of the answer above, and I apply this method to check whether an ROI region has changed or not.
Please refer to the page here: source code and simple introduction.
Here is the GPU version as well (OpenCV 2.x gpu module; win_size, win_stride, gamma_corr and img are assumed to be defined elsewhere):
cv::Mat temp;
gpu::GpuMat gpu_img, descriptors;
cv::gpu::HOGDescriptor gpu_hog(win_size, Size(16, 16), Size(8, 8), Size(8, 8), 9,
                               cv::gpu::HOGDescriptor::DEFAULT_WIN_SIGMA, 0.2, gamma_corr,
                               cv::gpu::HOGDescriptor::DEFAULT_NLEVELS);
gpu_img.upload(img);
gpu_hog.getDescriptors(gpu_img, win_stride, descriptors, cv::gpu::HOGDescriptor::DESCR_FORMAT_ROW_BY_ROW);
descriptors.download(temp);
OpenCV 3 changes the way GPU (i.e. CUDA) algorithms are exposed to the user; see the Transition Guide - CUDA.
To update the answer from user3398689 to OpenCV 3, here is a code snippet:
#include <opencv2/core/cuda.hpp>
#include <opencv2/cudaimgproc.hpp>
#include <opencv2/cudaobjdetect.hpp>   // declares cv::cuda::HOG
[...]
/* Suppose you load an image in a cv::Mat variable called 'src' */
int img_width = 320;
int img_height = 240;
int block_size = 16;
int bin_number = 9;
cv::Ptr<cv::cuda::HOG> cuda_hog = cuda::HOG::create(Size(img_width, img_height),
                                                    Size(block_size, block_size),
                                                    Size(block_size/2, block_size/2),
                                                    Size(block_size/2, block_size/2),
                                                    bin_number);
/* The following calls are optional: default values apply otherwise */
cuda_hog->setDescriptorFormat(cuda::HOG::DESCR_FORMAT_COL_BY_COL);
cuda_hog->setGammaCorrection(true);
cuda_hog->setWinStride(Size(img_width, img_height));
cv::cuda::GpuMat image;
cv::cuda::GpuMat descriptor;
image.upload(src);
/* May not apply to you */
/* CUDA HOG works with intensity (1 channel) or BGRA (4 channels) images */
/* The next function call converts a standard BGR image to BGRA using the GPU */
cv::cuda::GpuMat image_alpha;
cuda::cvtColor(image, image_alpha, COLOR_BGR2BGRA, 4);
cuda_hog->compute(image_alpha, descriptor);
cv::Mat dst;
descriptor.download(dst);
You can then use the descriptors in the 'dst' variable as you prefer, e.g. as suggested by G453.
