This question is specific to OpenCV:
The kmeans example given in the OpenCV documentation has a 2-channel matrix - one channel for each dimension of the feature vector. But some of the other examples seem to say that it should be a one-channel matrix with features along the columns and one row for each sample. Which of these is right?
If I have a 5-dimensional feature vector, what should the input matrix that I use look like?
This one:
cv::Mat inputSamples(numSamples, 1, CV_32FC(numFeatures))
or this one:
cv::Mat inputSamples(numSamples, numFeatures, CV_32F)
The correct answer is cv::Mat inputSamples(numSamples, numFeatures, CV_32F).
The OpenCV Documentation about kmeans says:
samples – Floating-point matrix of input samples, one row per sample
So it is not a floating-point vector of n-dimensional floats as in the other option. Which examples suggested such behaviour?
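For instance, building the input for a 5-dimensional feature vector could look roughly like this (a minimal sketch, assuming the usual OpenCV headers are included; the sample count and feature values are placeholders):

// Sketch: one row per sample, one column per feature, single-channel float matrix
int numSamples = 100;              // placeholder
int numFeatures = 5;
cv::Mat inputSamples(numSamples, numFeatures, CV_32F);
for (int i = 0; i < numSamples; i++)
    for (int j = 0; j < numFeatures; j++)
        inputSamples.at<float>(i, j) = 0.f;   // fill in the j-th feature of sample i here

cv::Mat labels, centers;
cv::kmeans(inputSamples, 3, labels,
           cv::TermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 100, 1e-4),
           5, cv::KMEANS_PP_CENTERS, centers);
// labels: numSamples x 1 (CV_32S), centers: 3 x numFeatures (CV_32F)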
Here is also a small example by me that shows how kmeans can be used. It clusters the pixels of an image and displays the result:
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
using namespace cv;
int main( int argc, char** argv )
{
Mat src = imread( argv[1], 1 );
Mat samples(src.rows * src.cols, 3, CV_32F);
for( int y = 0; y < src.rows; y++ )
for( int x = 0; x < src.cols; x++ )
for( int z = 0; z < 3; z++)
samples.at<float>(y + x*src.rows, z) = src.at<Vec3b>(y,x)[z];
int clusterCount = 15;
Mat labels;
int attempts = 5;
Mat centers;
kmeans(samples, clusterCount, labels, TermCriteria(CV_TERMCRIT_ITER|CV_TERMCRIT_EPS, 10000, 0.0001), attempts, KMEANS_PP_CENTERS, centers );
Mat new_image( src.size(), src.type() );
for( int y = 0; y < src.rows; y++ )
for( int x = 0; x < src.cols; x++ )
{
int cluster_idx = labels.at<int>(y + x*src.rows,0);
new_image.at<Vec3b>(y,x)[0] = centers.at<float>(cluster_idx, 0);
new_image.at<Vec3b>(y,x)[1] = centers.at<float>(cluster_idx, 1);
new_image.at<Vec3b>(y,x)[2] = centers.at<float>(cluster_idx, 2);
}
imshow( "clustered image", new_image );
waitKey( 0 );
}
As an alternative to reshaping the input matrix manually, you can use OpenCV's reshape function to achieve a similar result with less code. Here is my working implementation of reducing the colour count with the k-means method (in Java):
import java.util.ArrayList;
import java.util.List;

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.TermCriteria;
import org.opencv.highgui.Highgui;
import org.opencv.utils.Converters;

private final static int MAX_ITER = 10;
private final static int CLUSTERS = 16;

public static Mat colorMapKMeans(Mat img, int K, int maxIterations) {
    // unroll the 3 channels into columns: one row per pixel, one column per channel
    Mat m = img.reshape(1, img.rows() * img.cols());
    m.convertTo(m, CvType.CV_32F);

    Mat bestLabels = new Mat(m.rows(), 1, CvType.CV_8U);
    Mat centroids = new Mat(K, 1, CvType.CV_32F);
    Core.kmeans(m, K, bestLabels,
            new TermCriteria(TermCriteria.COUNT | TermCriteria.EPS, maxIterations, 1E-5),
            1, Core.KMEANS_RANDOM_CENTERS, centroids);

    // replace every pixel row by the centroid of its cluster
    List<Integer> idx = new ArrayList<>(m.rows());
    Converters.Mat_to_vector_int(bestLabels, idx);
    Mat imgMapped = new Mat(m.size(), m.type());
    for (int i = 0; i < idx.size(); i++) {
        Mat row = imgMapped.row(i);
        centroids.row(idx.get(i)).copyTo(row);
    }

    // roll the columns back into 3 channels and restore the original number of rows
    return imgMapped.reshape(3, img.rows());
}

public static void main(String[] args) {
    System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    Highgui.imwrite("result.png",
            colorMapKMeans(Highgui.imread(args[0], Highgui.CV_LOAD_IMAGE_COLOR),
                    CLUSTERS, MAX_ITER));
}
OpenCV reads the image into a 2-dimensional, 3-channel matrix. The first call to reshape - img.reshape(1, img.rows() * img.cols()); - essentially unrolls the 3 channels into columns. In the resulting matrix, one row corresponds to one pixel of the input image, and the 3 columns correspond to the colour components (BGR, since that is the order OpenCV uses).
After the K-Means algorithm has finished its work and the colour mapping has been applied, we call reshape again - imgMapped.reshape(3, img.rows()) - but now rolling the columns back into channels and reducing the row count to the original image's row count, thus getting back the original matrix format, but with a reduced number of colours.
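The same reshape trick also works in the C++ API; here is a rough, untested sketch of how the pixel-clustering example from the first answer could be written with it (the file name is a placeholder):

// Sketch: reshape instead of copying pixels by hand (C++ API)
cv::Mat src = cv::imread("input.png", 1);                // rows x cols, CV_8UC3
cv::Mat samples = src.reshape(1, src.rows * src.cols);   // (rows*cols) x 3, one channel
samples.convertTo(samples, CV_32F);                      // kmeans needs floats

cv::Mat labels, centers;
cv::kmeans(samples, 15, labels,
           cv::TermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 10000, 0.0001),
           5, cv::KMEANS_PP_CENTERS, centers);

// map every pixel to its cluster center, then fold the columns back into channels
cv::Mat mapped(samples.size(), samples.type());
for (int i = 0; i < samples.rows; i++)
    centers.row(labels.at<int>(i)).copyTo(mapped.row(i));
mapped.convertTo(mapped, CV_8U);
cv::Mat clustered = mapped.reshape(3, src.rows);         // back to rows x cols, CV_8UC3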
Related
I would like to find the median color in a masked area in OpenCV. Does OpenCV have a function that takes an image and a mask, and puts only the pixels from the image where mask != 0 into an array or Mat?
I don't know of any OpenCV function that creates a vector from masked values; I have written my own function to do that in the past, and you could do the same.
Alternatively, you could calculate the histogram and find the median from that, if your data is uint8.
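For an 8-bit single-channel image that could look roughly like this (a sketch assuming img is CV_8UC1, mask is CV_8UC1, and the usual OpenCV headers are included):

// Sketch: median of the masked pixels of a CV_8UC1 image via a 256-bin histogram
int histSize = 256;
float range[] = { 0, 256 };
const float* histRange = { range };
cv::Mat hist;
cv::calcHist(&img, 1, 0, mask, hist, 1, &histSize, &histRange);

int total = cv::countNonZero(mask);
int cumulative = 0, median = 0;
for (int bin = 0; bin < histSize; bin++)
{
    cumulative += cvRound(hist.at<float>(bin));
    if (cumulative >= (total + 1) / 2) { median = bin; break; }
}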
You should use the following function of the Mat class to copy all the pixels into another Mat by using the mask:
Mat rst;
img.copyTo(rst, mask);
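A small usage sketch (file names are placeholders). Note that rst keeps the size of img - pixels where mask == 0 are simply left at zero - so this gives you a masked copy, not a compact vector of the masked values:

cv::Mat img  = cv::imread("image.png");
cv::Mat mask = cv::imread("mask.png", 0);   // CV_8UC1, non-zero marks the pixels to copy
cv::Mat rst;
img.copyTo(rst, mask);                      // rst is allocated, zero-initialised, then the masked pixels are copied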
The post is quite old now, but - as there is still no such function available in OpenCV - I implemented it for my app. Maybe it will be useful for someone...
cv::Mat extractMaskedData(cv::Mat data, cv::Mat mask)
{
    CV_Assert(mask.size() == data.size());
    CV_Assert(mask.type() == CV_8UC1);

    const bool isContinuous = data.isContinuous() && mask.isContinuous();
    const int nRows = isContinuous ? 1 : data.rows;
    const int nCols = isContinuous ? data.rows * data.cols : data.cols;

    // size of one pixel in bytes: channels * bytes per element
    // (depth 8U/8S -> 1, 16U/16S -> 2, 32S/32F -> 4, 64F -> 8)
    const size_t pixelSize = data.channels() * (data.depth() < 2 ? 1 : data.depth() < 4 ? 2 : data.depth() < 6 ? 4 : 8);

    cv::Mat extractedData(0, 1, data.type());
    uint8_t* m;
    uint8_t* d;
    for (int i = 0; i < nRows; ++i) {
        m = mask.ptr<uint8_t>(i);
        d = data.ptr(i);
        for (int j = 0; j < nCols; ++j) {
            if (m[j]) {
                const cv::Mat pixelData(1, 1, data.type(), d + j * pixelSize);
                extractedData.push_back(pixelData);
            }
        }
    }
    return extractedData;
}
It returns a cv::Mat(n, 1, data.type()), where n is the number of non-zero elements in mask.
It may be optimised by using an image-type-specific pointer for d (e.g. cv::Vec3f* for CV_32FC3) instead of the generic uint8_t* d together with const cv::Mat pixelData(1, 1, data.type(), d + j * pixelSize).
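For a fixed type such as CV_8UC3, that specialisation could look roughly like this (an untested sketch, returning a std::vector instead of a Mat; assumes <vector> and the OpenCV core header are included):

// Sketch: type-specific extraction for CV_8UC3 data, no per-pixel Mat header needed
std::vector<cv::Vec3b> extractMaskedVec3b(const cv::Mat& data, const cv::Mat& mask)
{
    CV_Assert(data.type() == CV_8UC3 && mask.type() == CV_8UC1 && mask.size() == data.size());
    std::vector<cv::Vec3b> values;
    for (int i = 0; i < data.rows; ++i)
    {
        const uint8_t* m = mask.ptr<uint8_t>(i);
        const cv::Vec3b* d = data.ptr<cv::Vec3b>(i);
        for (int j = 0; j < data.cols; ++j)
            if (m[j])
                values.push_back(d[j]);
    }
    return values;
}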
As far as I know, the built-in split will split one 3-channel Mat into three 1-channel Mats. As a result, those three Mats are just gray-scale images with different intensities.
My intent is to get three 3-channel Mats as follows.
void splitTo8UC3(const Mat& input, vector<Mat>& output)
{
Mat blue = input.clone();
Mat green = input.clone();
Mat red = input.clone();
const uint N = input.rows * input.step;
for (uint i = 0; i < N; i += 3)
{
// blue.data[i]
green.data[i] = 0;
red.data[i] = 0;
blue.data[i + 1] = 0;
//green.data[i+1]
red.data[i + 1] = 0;
blue.data[i + 2] = 0;
green.data[i + 2] = 0;
//red.data[i+2]
}
output.push_back(blue);
output.push_back(green);
output.push_back(red);
}
It works, but instead of reinventing the wheel, I am looking for a built-in function if there is one.
Edit
The proposed solution must be faster than mine.
EDIT: I incorporated Dan's suggested improvements from his comment.
I can't think of a built-in function that does exactly this, and I also couldn't find one. But while doing some research, I came across the mixChannels function, which might improve your solution. At least, it avoids implementing a loop yourself.
Here are my modifications to your code:
void splitTo8UC3(const cv::Mat& input, std::vector<cv::Mat>& output)
{
// Allocate outputs
cv::Mat b(cv::Mat::zeros(input.size(), input.type()));
cv::Mat g(cv::Mat::zeros(input.size(), input.type()));
cv::Mat r(cv::Mat::zeros(input.size(), input.type()));
// Collect outputs
cv::Mat out[] = { b, g, r };
// Set up index pairs
int from_to[] = { 0,0, 1,4, 2,8 };
cv::mixChannels(&input, 1, out, 3, from_to, 3);
output.assign(std::begin(out), std::end(out));
}
Let's have this test image colors.png:
And, let's have this test code:
cv::Mat img = cv::imread("images/colors.png");
std::vector<cv::Mat> bgr;
splitTo8UC3(img, bgr);
cv::imwrite("images/b.png", bgr[0]);
cv::imwrite("images/g.png", bgr[1]);
cv::imwrite("images/r.png", bgr[2]);
Then, we get the following outputs b.png, g.png, and r.png, which hopefully are the same as for your initial solution:
Hope that helps!
I'm using KNN to classify images. Now my problem is how to draw the results.
See the OpenCV documentation for KNN (CvKNearest).
I'm using the function find_nearest, whose signature looks like this:
C++: float CvKNearest::find_nearest(const Mat& samples, int k, Mat& results, Mat& neighborResponses, Mat& dists)
Where the parameters are:
samples : Input samples stored by rows. It is a single-precision floating-point matrix of size number_of_samples × number_of_features.
k : Number of used nearest neighbors. It must satisfy the constraint k ≤ CvKNearest::get_max_k().
results : Vector with results of prediction (regression or classification) for each input sample. It is a single-precision floating-point vector with number_of_samples elements.
neighbors : Optional output pointers to the neighbor vectors themselves. It is an array of k*samples->rows pointers.
neighborResponses : Optional output values for corresponding neighbors. It is a single-precision floating-point matrix of size number_of_samples × k.
dist : Optional output distances from the input vectors to the corresponding neighbors. It is a single-precision floating-point matrix of size number_of_samples × k.
A possible implementation would look like this:
#include "ml.h"
#include "highgui.h"
int main( int argc, char** argv )
{
const int K = 10;
int i, j, k, accuracy;
float response;
int train_sample_count = 100;
CvRNG rng_state = cvRNG(-1);
CvMat* trainData = cvCreateMat( train_sample_count, 2, CV_32FC1 );
CvMat* trainClasses = cvCreateMat( train_sample_count, 1, CV_32FC1 );
IplImage* img = cvCreateImage( cvSize( 500, 500 ), 8, 3 );
float _sample[2];
CvMat sample = cvMat( 1, 2, CV_32FC1, _sample );
cvZero( img );
CvMat trainData1, trainData2, trainClasses1, trainClasses2;
// form the training samples
cvGetRows( trainData, &trainData1, 0, train_sample_count/2 );
cvRandArr( &rng_state, &trainData1, CV_RAND_NORMAL, cvScalar(200,200), cvScalar(50,50) );
cvGetRows( trainData, &trainData2, train_sample_count/2, train_sample_count );
cvRandArr( &rng_state, &trainData2, CV_RAND_NORMAL, cvScalar(300,300), cvScalar(50,50) );
cvGetRows( trainClasses, &trainClasses1, 0, train_sample_count/2 );
cvSet( &trainClasses1, cvScalar(1) );
cvGetRows( trainClasses, &trainClasses2, train_sample_count/2, train_sample_count );
cvSet( &trainClasses2, cvScalar(2) );
    // learn classifier
    CvKNearest knn( trainData, trainClasses, 0, false, K );
    CvMat* nearests = cvCreateMat( 1, K, CV_32FC1 );

    for( i = 0; i < img->height; i++ )
    {
        for( j = 0; j < img->width; j++ )
        {
            sample.data.fl[0] = (float)j;
            sample.data.fl[1] = (float)i;

            // estimate the response and get the neighbors' labels
            response = knn.find_nearest(&sample,K,0,0,nearests,0);

            // compute the number of neighbors representing the majority
            for( k = 0, accuracy = 0; k < K; k++ )
            {
                if( nearests->data.fl[k] == response)
                    accuracy++;
            }
            // (drawing of the classified point according to response/accuracy would go here)
        }
    }
    return 0;
}
Now back to the problem. I want to use the function drawMatches (see its documentation). This function expects its input as a matrix of DMatch objects. As you can see, knn.find_nearest does not give me any return value of this type. Do you have any suggestion how to convert those?
Thanks in advance!
As input data I have a 24-bit RGB image and a palette with 2 to 20 fixed colours. These colours are in no way spread regularly over the full colour range.
Now I have to modify the colours of the input image so that only the colours of the given palette are used - using the colour from the palette that is closest to the original colour (not closest mathematically, but in terms of human visual impression). So what I need is an algorithm that takes an input colour and finds the colour in the target palette that visually fits it best. Please note: I'm not looking for a naive comparison/difference algorithm but for something that really incorporates the impression a colour makes on humans!
Since this is something that should already have been done, and because I do not want to re-invent the wheel: is there some example source code out there that does this job? In the best case it is really a piece of code and not a link to a disastrously huge library ;-)
(I'd guess OpenCV does not provide such a function?)
Thanks
You should look at the Lab color space. It was designed so that distances in the colour space correspond to perceptual distances. So once you have converted your image, you can compute the distances as you would have done before, but you should get a better result from a perceptual point of view. In OpenCV you can use the cvtColor(source, destination, CV_BGR2Lab) function.
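A rough sketch of that idea - convert both the image and the palette to Lab once, then compare colours with a plain Euclidean distance in that space (the file name and palette colours are just placeholders):

// Sketch: compare colours in Lab instead of BGR
cv::Mat imgBGR = cv::imread("input.png", 1);
cv::Mat imgLab;
cv::cvtColor(imgBGR, imgLab, CV_BGR2Lab);

// palette as a 1 x N CV_8UC3 row of BGR colours (placeholder values)
cv::Mat paletteBGR(1, 2, CV_8UC3);
paletteBGR.at<cv::Vec3b>(0, 0) = cv::Vec3b(0, 0, 255);    // red
paletteBGR.at<cv::Vec3b>(0, 1) = cv::Vec3b(255, 0, 0);    // blue
cv::Mat paletteLab;
cv::cvtColor(paletteBGR, paletteLab, CV_BGR2Lab);

// Euclidean distance in Lab approximates perceived colour difference
cv::Vec3b p = imgLab.at<cv::Vec3b>(0, 0);
cv::Vec3b q = paletteLab.at<cv::Vec3b>(0, 0);
float d0 = (float)p[0] - q[0], d1 = (float)p[1] - q[1], d2 = (float)p[2] - q[2];
float dist = std::sqrt(d0*d0 + d1*d1 + d2*d2);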
Another idea would be to use dithering. The idea is to mix missing colours using neighbouring pixels. A popular algorithm for this is Floyd-Steinberg dithering.
Here is an example of mine, where I combined an optimized palette computed with k-means, the Lab colour space, and Floyd-Steinberg dithering:
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;
cv::Mat floydSteinberg(cv::Mat img, cv::Mat palette);
cv::Vec3b findClosestPaletteColor(cv::Vec3b color, cv::Mat palette);
int main(int argc, char** argv)
{
    // Number of clusters (colors on result image)
    int nrColors = 18;

    cv::Mat imgBGR = imread(argv[1], 1);
    cv::Mat img;
    cvtColor(imgBGR, img, CV_BGR2Lab);

    cv::Mat colVec = img.reshape(1, img.rows*img.cols); // change to a Nx3 column vector
    cv::Mat colVecD;
    colVec.convertTo(colVecD, CV_32FC3, 1.0);           // convert to floating point

    cv::Mat labels, centers;
    cv::kmeans(colVecD, nrColors, labels,
               cv::TermCriteria(CV_TERMCRIT_ITER, 100, 0.1),
               3, cv::KMEANS_PP_CENTERS, centers);      // compute k mean centers

    // replace pixels by their corresponding cluster centers
    cv::Mat imgPosterized = img.clone();
    for (int i = 0; i < img.rows; i++)
        for (int j = 0; j < img.cols; j++)
            for (int k = 0; k < 3; k++)
                imgPosterized.at<Vec3b>(i,j)[k] = centers.at<float>(labels.at<int>(j+img.cols*i), k);

    // convert palette back to uchar
    cv::Mat palette;
    centers.convertTo(palette, CV_8UC3, 1.0);

    // call floyd steinberg dithering algorithm
    cv::Mat fs = floydSteinberg(img, palette);

    cv::Mat imgPosterizedBGR, fsBGR;
    cvtColor(imgPosterized, imgPosterizedBGR, CV_Lab2BGR);
    cvtColor(fs, fsBGR, CV_Lab2BGR);

    imshow("input", imgBGR);              // original image
    imshow("result", imgPosterizedBGR);   // posterized image
    imshow("fs", fsBGR);                  // floyd steinberg dithering
    waitKey();

    return 0;
}
cv::Mat floydSteinberg(cv::Mat imgOrig, cv::Mat palette)
{
    cv::Mat img = imgOrig.clone();
    cv::Mat resImg = img.clone();
    for (int i = 0; i < img.rows; i++)
        for (int j = 0; j < img.cols; j++)
        {
            cv::Vec3b newpixel = findClosestPaletteColor(img.at<Vec3b>(i,j), palette);
            resImg.at<Vec3b>(i,j) = newpixel;

            // distribute the quantization error to the not-yet-visited neighbours
            // with the standard Floyd-Steinberg weights 7/16, 3/16, 5/16 and 1/16
            for (int k = 0; k < 3; k++)
            {
                int quant_error = (int)img.at<Vec3b>(i,j)[k] - newpixel[k];
                if (j+1 < img.cols)
                    img.at<Vec3b>(i,j+1)[k] = min(255, max(0, (int)img.at<Vec3b>(i,j+1)[k] + (7 * quant_error) / 16));
                if (i+1 < img.rows && j-1 >= 0)
                    img.at<Vec3b>(i+1,j-1)[k] = min(255, max(0, (int)img.at<Vec3b>(i+1,j-1)[k] + (3 * quant_error) / 16));
                if (i+1 < img.rows)
                    img.at<Vec3b>(i+1,j)[k] = min(255, max(0, (int)img.at<Vec3b>(i+1,j)[k] + (5 * quant_error) / 16));
                if (i+1 < img.rows && j+1 < img.cols)
                    img.at<Vec3b>(i+1,j+1)[k] = min(255, max(0, (int)img.at<Vec3b>(i+1,j+1)[k] + (1 * quant_error) / 16));
            }
        }
    return resImg;
}
float vec3bDist(cv::Vec3b a, cv::Vec3b b)
{
    return sqrt( pow((float)a[0]-b[0],2) + pow((float)a[1]-b[1],2) + pow((float)a[2]-b[2],2) );
}

cv::Vec3b findClosestPaletteColor(cv::Vec3b color, cv::Mat palette)
{
    int minI = 0;
    float minDistance = vec3bDist(color, palette.at<Vec3b>(0));
    for (int i = 0; i < palette.rows; i++)
    {
        float distance = vec3bDist(color, palette.at<Vec3b>(i));
        if (distance < minDistance)
        {
            minDistance = distance;
            minI = i;
        }
    }
    return palette.at<Vec3b>(minI);
}
Try this algorithm (it will reduce the number of colours, but it computes the palette by itself):
#include <opencv2/opencv.hpp>
#include "opencv2/legacy/legacy.hpp"
#include <vector>
#include <list>
#include <iostream>
using namespace cv;
using namespace std;
int main(void)
{
// Number of clusters (colors on result image)
int NrGMMComponents = 32;
// Source file name
string fname="D:\\ImagesForTest\\tools.jpg";
cv::Mat SampleImg = imread(fname,1);
//cv::GaussianBlur(SampleImg,SampleImg,Size(5,5),3);
int SampleImgHeight = SampleImg.rows;
int SampleImgWidth = SampleImg.cols;
// Pick datapoints
vector<Vec3d> ListSamplePoints;
for (int y=0; y<SampleImgHeight; y++)
{
for (int x=0; x<SampleImgWidth; x++)
{
// Get pixel color at that position
Vec3b bgrPixel = SampleImg.at<Vec3b>(y, x);
uchar b = bgrPixel.val[0];
uchar g = bgrPixel.val[1];
uchar r = bgrPixel.val[2];
if(rand()%25==0) // Pick not every pixel, but every 25-th
{
ListSamplePoints.push_back(Vec3d(b,g,r));
}
} // for (x)
} // for (y)
// Form training matrix
Mat labels;
int NrSamples = ListSamplePoints.size();
Mat samples( NrSamples, 3, CV_32FC1 );
for (int s=0; s<NrSamples; s++)
{
Vec3d v = ListSamplePoints.at(s);
samples.at<float>(s,0) = (float) v[0];
samples.at<float>(s,1) = (float) v[1];
samples.at<float>(s,2) = (float) v[2];
}
cout << "Learning to represent the sample distributions with" << NrGMMComponents << "gaussians." << endl;
// Algorithm parameters
CvEMParams params;
params.covs = NULL;
params.means = NULL;
params.weights = NULL;
params.probs = NULL;
params.nclusters = NrGMMComponents;
params.cov_mat_type = CvEM::COV_MAT_GENERIC; // DIAGONAL, GENERIC, SPHERICAL
params.start_step = CvEM::START_AUTO_STEP;
params.term_crit.max_iter = 1500;
params.term_crit.epsilon = 0.001;
params.term_crit.type = CV_TERMCRIT_ITER|CV_TERMCRIT_EPS;
//params.term_crit.type = CV_TERMCRIT_ITER;
// Train
cout << "Started GMM training" << endl;
CvEM em_model;
em_model.train( samples, Mat(), params, &labels );
cout << "Finished GMM training" << endl;
// Result image
Mat img = Mat::zeros( Size( SampleImgWidth, SampleImgHeight ), CV_8UC3 );
// Ask classifier for each pixel
Mat sample( 1, 3, CV_32FC1 );
Mat means;
means=em_model.getMeans();
for(int i = 0; i < img.rows; i++ )
{
for(int j = 0; j < img.cols; j++ )
{
Vec3b v=SampleImg.at<Vec3b>(i,j);
sample.at<float>(0,0) = (float) v[0];
sample.at<float>(0,1) = (float) v[1];
sample.at<float>(0,2) = (float) v[2];
int response = cvRound(em_model.predict( sample ));
img.at<Vec3b>(i,j)[0]=means.at<double>(response,0);
img.at<Vec3b>(i,j)[1]=means.at<double>(response,1);
img.at<Vec3b>(i,j)[2]=means.at<double>(response,2);
}
}
img.convertTo(img,CV_8UC3);
imshow("result",img);
waitKey();
// Save the result
cv::imwrite("result.png", img);
}
PS: For perceptual colour distance measurement it's better to use the L*a*b colour space. There is a converter in OpenCV for this purpose. For clustering you can use k-means with predefined cluster centers (your palette entries). After clustering you'll get points with the indexes of the palette entries.
I am new to face matching; I am trying to learn how to use an SVM with HOG descriptors.
I wrote a simple face recognizer with an SVM, but when I run it, the code always returns 1.
float *getHOG(const cv::Mat &image, int* count)//Compute HOG
{
cv::HOGDescriptor hog;
std::vector<float> res;
cv::Mat img2;
cv::resize(image, img2, cv::Size(64, 128));
hog.compute(img2, res, cv::Size(8, 8), cv::Size(0, 0));
*count = res.size();
float* result = new float[*count];
for(int i = 0; i < res.size(); i++)
{
result[i] = res[i];
}
return result;
}
const int dataSetLength = 10;
float **getTraininigData(int* setlen, int* veclen)//Load some samples of data
{
char *names[dataSetLength] = {
"../faces/s1/1.pgm",
"../faces/s1/2.pgm",
"../faces/s1/3.pgm",
"../faces/s1/4.pgm",
"../faces/s1/5.pgm",
"../faces/cars/1.jpg",
"../faces/cars/2.jpg",
"../faces/cars/3.jpg",
"../faces/cars/4.jpg",
"../faces/cars/5.jpg",
};
float **res = new float* [dataSetLength];
for(int i = 0; i < dataSetLength; i++)
{
std::cout<<names[i]<<"\n";
cv::Mat img = cv::imread(names[i], 0);
res[i] = getHOG(img, veclen);
}
*setlen = dataSetLength;
return res;
}
void test()//Training and activate SVM
{
int setlen, veclen;
float **trainingData = getTraininigData(&setlen, &veclen);
float *labels = new float[dataSetLength];
for(int i = 0; i < dataSetLength; i++)
{
labels[i] = (i < dataSetLength/2)? 0.0 : 1.0;
}
cv::Mat labelsMat(setlen, 1, CV_32FC1, labels);
cv::Mat trainingDataMat(setlen, veclen, CV_32FC1, trainingData);
cv::SVMParams params;
params.svm_type = cv::SVM::C_SVC;
params.kernel_type = cv::SVM::LINEAR;
params.term_crit = cv::TermCriteria(CV_TERMCRIT_ITER, 100, 1e-6);
std::cout<<labelsMat<<"\n";
cv::SVM SVM;
SVM.train(trainingDataMat, labelsMat, cv::Mat(), cv::Mat(), params);
cv::Mat img = cv::imread("../faces/s1/2.pgm", 0);//sample from train data, but ansewer is 1 for every sample
auto desc = getHOG(img, &veclen);
cv::Mat sampleMat(1, veclen, CV_32FC1, desc);
float response = SVM.predict(sampleMat);
std::cout<<"resp "<< response<<"\n";
}
What is wrong with my code?
PS: sorry for my writing mistakes. English is not my native language.
You don't have much training data. Note how Dalal and Triggs, in their original paper on HOG (http://lear.inrialpes.fr/people/triggs/pubs/Dalal-cvpr05.pdf), used thousands of examples to train the SVM; you have just 5 negative and 5 positive samples.
You haven't set the C parameter - you need to find a good value via cross-validation (see the sketch at the end of this answer) - and for that you will need more data.
Possibly the HOG descriptors for faces and cars are not separable with a linear kernel, so try an RBF kernel.
But this is unlikely to be the issue, since Dalal and Triggs use a linear SVM in their paper.
Read this: http://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf
If you haven't done this yet, get the SVM working for a simpler case (e.g. just use image patches instead of HOG).
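Regarding the C parameter mentioned above: in the OpenCV 2.4 C++ API you can let OpenCV search for it (and for gamma, if you switch to an RBF kernel) with k-fold cross-validation via train_auto. A rough sketch reusing the matrices from your test() function (the kernel choice and fold count are just illustrative, and with only 10 samples the search will not be very meaningful):

// Sketch: cross-validated parameter search instead of a fixed C
cv::SVMParams params;
params.svm_type    = cv::SVM::C_SVC;
params.kernel_type = cv::SVM::RBF;   // or cv::SVM::LINEAR
params.term_crit   = cv::TermCriteria(CV_TERMCRIT_ITER, 100, 1e-6);

cv::SVM svm;
// 2-fold cross-validation over the built-in default grids for C (and gamma for RBF);
// with a real training set you would use more folds (e.g. 10) and far more samples
svm.train_auto(trainingDataMat, labelsMat, cv::Mat(), cv::Mat(), params, 2);

float response = svm.predict(sampleMat);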