partition a set of images into k clusters with opencv - opencv

I have an image data set that I would like to partition into k clusters. I am trying to use the opencv implementation of k-means clustering.
Firstly, I store my Mat images into a vector of Mat and then I am trying to use the kmeans function. However, I am getting an assertion error.
Should the images be stored in a different kind of structure? I have read the k-means documentation and I don't seem to understand what I am doing wrong. This is my code (thank you in advance):
vector<Mat> images;
string folder = "D:\\football\\positive_clustering\\";
string mask = "*.bmp";
vector<string> files = getFileList(folder + mask);
for (int i = 0; i < files.size(); i++)
{
    Mat img = imread(folder + files[i]);
    images.push_back(img);
}
cout << "Vector of positive samples created" << endl;

int k = 10;
cv::Mat bestLabels;
cv::kmeans(images, k, bestLabels, TermCriteria(), 3, KMEANS_PP_CENTERS);

// have a look
vector<cv::Mat> clusterViz(bestLabels.rows);
for (int i = 0; i < bestLabels.rows; i++)
{
    clusterViz[bestLabels.at<int>(i)].push_back(cv::Mat(images[bestLabels.at<int>(i)]));
}
namedWindow("clusters", WINDOW_NORMAL);
for (int i = 0; i < clusterViz.size(); i++)
{
    cv::imshow("clusters", clusterViz[i]);
    cv::waitKey();
}
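Note: cv::kmeans expects its data as a single floating-point matrix with one sample per row, not a vector<Mat>, which is the likely cause of the assertion. A minimal sketch of packing the images into such a matrix (assuming all images share the same size and type):
cv::Mat samples((int)images.size(), images[0].rows * images[0].cols * images[0].channels(), CV_32F);
for (size_t i = 0; i < images.size(); i++)
{
    cv::Mat row = images[i].reshape(1, 1);       // flatten to a single-channel, single-row Mat
    row.convertTo(samples.row((int)i), CV_32F);  // write the converted values into that sample row
}
cv::Mat centers;
cv::kmeans(samples, k, bestLabels,
           cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 10, 1.0),
           3, cv::KMEANS_PP_CENTERS, centers);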

Related

OpenCV SVM prediction inconsistent?

This question has been asked here before, but there is still no answer or solution. My problem is this: I trained an SVM (with an RBF kernel) for smoke detection, using the RGB histogram distribution of the smoke (8 bins per channel, so M = 24):
cv::Mat labelsMat = cv::Mat(N, 1, CV_32SC1);
for (int i = 0; i < N; i++)
{
    labelsMat.at<int>(i, 0) = labels[i];
}
cv::Mat trainingDataMat = cv::Mat(N, M, CV_32FC1);
for (int i = 0; i < N; i++)
{
    for (int j = 0; j < M; j++)
    {
        trainingDataMat.at<float>(i, j) = histogramData[i][j];
    }
}
// Create the SVM
cv::Ptr<ml::SVM> svm = ml::SVM::create();
svm->setType(ml::SVM::C_SVC);
svm->setKernel(ml::SVM::RBF);
svm->setTermCriteria(cv::TermCriteria(cv::TermCriteria::MAX_ITER, 1000, 1e-8));
// Train the SVM
svm->trainAuto(trainingDataMat, ml::ROW_SAMPLE, labelsMat);
svm->save(SVMFileName);
Then I saved the SVM model in a file. For the detection, after loading the SVM model:
svm = cv::ml::SVM::load(SVMFile);
I proceeded with the smoke detection; in this case to decide for each detected blob in a frame whether it's smoke or not:
for (int i = 0; i < 8; i++)
    histogramData.at<float>(0, i) = Rhist[i];
for (int i = 8; i < 16; i++)
    histogramData.at<float>(0, i) = Ghist[i];
for (int i = 16; i < 24; i++)
    histogramData.at<float>(0, i) = Bhist[i];
float response = svm->predict(histogramData);
The frames where a detection (true or false positive) occurs are saved along with the frame number. When I run this on the same video several times, each run produces different results (different frame numbers), even though the blob detection always produces the same blobs. Regarding the detection, most of the time the smoke will be detected, but there are cases where the same smoke in the same video is not detected.
Does anybody have any idea how to resolve this? Or is this a known problem with OpenCV's SVM?
Just realized my stupid error in the code: the indexing of Ghist & Bhist to form the data for prediction is totally incorrect, hence the inconsistencies!
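For reference, a sketch of what the corrected indexing would presumably look like, assuming Rhist, Ghist and Bhist each hold 8 bins (the original loops index Ghist and Bhist past their 8 elements):
for (int i = 0; i < 8; i++)
    histogramData.at<float>(0, i) = Rhist[i];
for (int i = 8; i < 16; i++)
    histogramData.at<float>(0, i) = Ghist[i - 8];  // offset back into 0..7
for (int i = 16; i < 24; i++)
    histogramData.at<float>(0, i) = Bhist[i - 16]; // offset back into 0..7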

OpenCV - get coordinates of top of object in contour

Given a contour such as the one seen below, is there a way to get the X,Y coordinates of the top point in the contour? I'm using Python, but other language examples are fine.
Since every pixel needs to be checked, I'm afraid you will have to iterate over the image line by line and see which is the first white pixel.
You can iterate over the image until you encounter a pixel that isn't black.
I will write an example in C++.
cv::Mat image; // your binary image with type CV_8UC1 (8-bit 1-channel image)
int topRow(-1), topCol(-1);
for (int i = 0; i < image.rows; i++) {
    uchar* ptr = image.ptr<uchar>(i);
    for (int j = 0; j < image.cols; j++) {
        if (ptr[j] != 0) {
            topRow = i;
            topCol = j;
            std::cout << "Top point: " << i << ", " << j << std::endl;
            break;
        }
    }
    if (topRow != -1)
        break;
}

Filling and reading elements of Mat in opencv

I have a simple question. I want to create a normal 2D matrix to use as a bin for integers, and increment some of its elements, but it doesn't work. Why? It just prints some unknown symbols.
Here is my code:
Mat img = imread("img.jpg", 0);
Mat bin = Mat::zeros(img.size(), CV_8U); // also tried 8UC1
for (size_t i = 0; i < 100; i++)
{
    bin.at<uchar>(i, 50) = 200;
    cout << bin.at<uchar>(i, 50) << endl;
    //(bin.at<uchar>(i, 50))++; // if the above statement works then I will use this incrementer
}
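The "unknown symbols" are most likely the uchar values being streamed as raw characters: std::cout prints an unsigned char as a character code, not as a number. A minimal sketch of printing the numeric value instead:
bin.at<uchar>(i, 50) = 200;
cout << (int)bin.at<uchar>(i, 50) << endl; // cast to int so "200" is printed, not the character with code 200
//bin.at<uchar>(i, 50)++;                  // incrementing the element itself works as expected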

cv::SVM response one class for every sample

I am new to face matching, and I am trying to learn how to use an SVM with HOG descriptors.
I wrote a simple face recognizer with an SVM, but when I run it, the code always returns 1.
float *getHOG(const cv::Mat &image, int* count) // Compute HOG
{
    cv::HOGDescriptor hog;
    std::vector<float> res;
    cv::Mat img2;
    cv::resize(image, img2, cv::Size(64, 128));
    hog.compute(img2, res, cv::Size(8, 8), cv::Size(0, 0));
    *count = res.size();
    float* result = new float[*count];
    for (int i = 0; i < res.size(); i++)
    {
        result[i] = res[i];
    }
    return result;
}
const int dataSetLength = 10;
float **getTraininigData(int* setlen, int* veclen) // Load some samples of data
{
    char *names[dataSetLength] = {
        "../faces/s1/1.pgm",
        "../faces/s1/2.pgm",
        "../faces/s1/3.pgm",
        "../faces/s1/4.pgm",
        "../faces/s1/5.pgm",
        "../faces/cars/1.jpg",
        "../faces/cars/2.jpg",
        "../faces/cars/3.jpg",
        "../faces/cars/4.jpg",
        "../faces/cars/5.jpg",
    };
    float **res = new float*[dataSetLength];
    for (int i = 0; i < dataSetLength; i++)
    {
        std::cout << names[i] << "\n";
        cv::Mat img = cv::imread(names[i], 0);
        res[i] = getHOG(img, veclen);
    }
    *setlen = dataSetLength;
    return res;
}
void test() // Train and run the SVM
{
    int setlen, veclen;
    float **trainingData = getTraininigData(&setlen, &veclen);
    float *labels = new float[dataSetLength];
    for (int i = 0; i < dataSetLength; i++)
    {
        labels[i] = (i < dataSetLength / 2) ? 0.0 : 1.0;
    }
    cv::Mat labelsMat(setlen, 1, CV_32FC1, labels);
    cv::Mat trainingDataMat(setlen, veclen, CV_32FC1, trainingData);
    cv::SVMParams params;
    params.svm_type = cv::SVM::C_SVC;
    params.kernel_type = cv::SVM::LINEAR;
    params.term_crit = cv::TermCriteria(CV_TERMCRIT_ITER, 100, 1e-6);
    std::cout << labelsMat << "\n";
    cv::SVM SVM;
    SVM.train(trainingDataMat, labelsMat, cv::Mat(), cv::Mat(), params);
    cv::Mat img = cv::imread("../faces/s1/2.pgm", 0); // sample from the training data, but the answer is 1 for every sample
    auto desc = getHOG(img, &veclen);
    cv::Mat sampleMat(1, veclen, CV_32FC1, desc);
    float response = SVM.predict(sampleMat);
    std::cout << "resp " << response << "\n";
}
What is wrong with my code?
PS: Sorry for my writing mistakes; English is not my native language.
You don't have much training data. Note how Dalal and Triggs in their original paper on HOG (http://lear.inrialpes.fr/people/triggs/pubs/Dalal-cvpr05.pdf) used thousands of examples to train the SVM; you have just 5 positive and 5 negative.
You haven't set the C parameter (you need to find a good value via cross-validation), and for that you will also need more data.
Possibly the HOG descriptors for faces and cars are not separable with a linear kernel, so try RBF. But this is unlikely to be the issue, since Dalal and Triggs use a linear SVM in their paper.
Read this: http://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf
If you haven't done this yet, get the SVM working for a simpler case first (e.g. use raw image patches instead of HOG).
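For reference, a minimal sketch (using the same OpenCV 2.4-style cv::SVM API as the question) of letting train_auto pick C and gamma via cross-validation with an RBF kernel; with only ten samples the cross-validation will not be reliable, so treat this as a starting point only:
cv::SVMParams params;
params.svm_type = cv::SVM::C_SVC;
params.kernel_type = cv::SVM::RBF; // try RBF if a linear kernel does not separate the classes
params.term_crit = cv::TermCriteria(CV_TERMCRIT_ITER, 100, 1e-6);

cv::SVM svm;
// train_auto cross-validates C (and gamma for RBF) over its default parameter grids
svm.train_auto(trainingDataMat, labelsMat, cv::Mat(), cv::Mat(), params, 5 /* k-fold */);
float response = svm.predict(sampleMat);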

template initialization of opencv Mat from Vector

I'm writing a function with a few lines to convert a 2D STL vector to an OpenCV Mat. OpenCV supports Mat initialization from a vector with Mat(vector), but this time I am trying a 2D vector and it is not working.
The function is simple, like:
template <class NumType>
Mat Vect2Mat(vector<vector<NumType>> vect)
{
    Mat mtx = Mat(vect.size(), vect[0].size(), CV_64F, 0); // don't need to init??
    //Mat mtx;
    // copy data
    for (int i = 0; i < vect.size(); i++)
        for (int j = 0; j < vect[i].size(); j++)
        {
            mtx.at<NumType>(i, j) = vect[i][j];
            //cout << vect[i][j] << " ";
        }
    return mtx;
}
So is there a way to initialize Mat mtx according to NumType? The syntax always seems fixed to CV_32F, CV_64F, ..., and is therefore very restrictive.
Thank you!
I think I've found the answer, which is given in the OpenCV documentation. They call the technique a "class trait", implemented through the DataType class.
It looks like this:
Mat mtx = Mat::zeros(vect.size(), vect[0].size(), DataType<NumType>::type);
For example:
template <class NumType>
cv::Mat Vect2Mat(std::vector<std::vector<NumType>> vect)
{
    cv::Mat mtx = cv::Mat::zeros(vect.size(), vect[0].size(), cv::DataType<NumType>::type);
    //Mat mtx;
    // copy data
    for (int i = 0; i < vect.size(); i++)
        for (int j = 0; j < vect[i].size(); j++)
        {
            mtx.at<NumType>(i, j) = vect[i][j];
            //cout << vect[i][j] << " ";
        }
    return mtx;
}
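A hypothetical usage example (the nested vector v below is made up for illustration):
std::vector<std::vector<double>> v = { {1.0, 2.0, 3.0},
                                       {4.0, 5.0, 6.0} };
cv::Mat m = Vect2Mat(v);      // DataType<double>::type resolves to CV_64F
std::cout << m << std::endl;  // prints the 2x3 matrix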
