I use predict() in my program; the following is my code:
int plateJudge(vector<Mat>& inVec, vector<Mat>& resultVec)
{
    size_t num = inVec.size();
    for (size_t j = 0; j < num; j++)
    {
        Mat inMat = inVec[j];
        Mat gray;
        cvtColor(inMat, gray, COLOR_BGR2GRAY);
        equalizeHist(gray, gray);
        Mat p = gray.reshape(1, 1);   // flatten to a 1 x N row vector
        p.convertTo(p, CV_32FC1);     // SVM expects CV_32F samples
        int response = (int)svm->predict(p);
        if (response == 1)
        {
            resultVec.push_back(inMat);
        }
    }
    return 0;
}
but I got the error:
error: (-215) samples.cols == var_count && samples.type() == 5 in function predict
I have converted the image to grayscale and reshaped the array to 1×n, but it still doesn't work. Also, the SVM itself is set up correctly (loaded from the trained model). So, that's all; any help is appreciated. Thanks a lot!
You should pass a matrix with the same dimensions and type as the samples you passed during training; var_count in the assertion is the number of feature columns the SVM was trained on.
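In other words, if the SVM was trained on features from images of a fixed size, each input to predict() has to be brought to exactly that size first. A sketch of the loop body (TRAIN_W and TRAIN_H are placeholders for whatever size your training images actually had):

const int TRAIN_W = 136, TRAIN_H = 36;  // placeholders: use your real training size
Mat gray, sized;
cvtColor(inMat, gray, COLOR_BGR2GRAY);
equalizeHist(gray, gray);
resize(gray, sized, Size(TRAIN_W, TRAIN_H)); // match the training sample size
Mat p = sized.reshape(1, 1);                 // 1 x (TRAIN_W * TRAIN_H)
p.convertTo(p, CV_32FC1);                    // type 5 == CV_32F, as the assertion requires
int response = (int)svm->predict(p);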
I have an image data set that I would like to partition into k clusters, using the OpenCV implementation of k-means clustering.
First, I store my Mat images in a vector of Mat, and then I try to use the kmeans function. However, I get an assertion error.
Should the images be stored in a different kind of structure? I have read the k-means documentation and I don't see what I am doing wrong. This is my code; thank you in advance:
vector<Mat> images;
string folder = "D:\\football\\positive_clustering\\";
string mask = "*.bmp";
vector<string> files = getFileList(folder + mask);
for (int i = 0; i < files.size(); i++)
{
    Mat img = imread(folder + files[i]);
    images.push_back(img);
}
cout << "Vector of positive samples created" << endl;

int k = 10;
cv::Mat bestLabels;
cv::kmeans(images, k, bestLabels, TermCriteria(), 3, KMEANS_PP_CENTERS);

// have a look
vector<cv::Mat> clusterViz(bestLabels.rows);
for (int i = 0; i < bestLabels.rows; i++)
{
    clusterViz[bestLabels.at<int>(i)].push_back(cv::Mat(images[bestLabels.at<int>(i)]));
}
namedWindow("clusters", WINDOW_NORMAL);
for (int i = 0; i < clusterViz.size(); i++)
{
    cv::imshow("clusters", clusterViz[i]);
    cv::waitKey();
}
Given a contour such as the one seen below, is there a way to get the X,Y coordinates of the top point in the contour? I'm using Python, but other language examples are fine.
Since every pixel may need to be checked, I'm afraid you will have to iterate over the image row by row and stop at the first pixel that isn't black. Here is an example in C++.
cv::Mat image; // your binary image with type CV_8UC1 (8-bit 1-channel image)
int topRow(-1), topCol(-1);
// Scan rows from the top; the first non-zero pixel found is the topmost point.
for (int i = 0; i < image.rows; i++) {
    const uchar* ptr = image.ptr<uchar>(i);
    for (int j = 0; j < image.cols; j++) {
        if (ptr[j] != 0) {
            topRow = i;
            topCol = j;
            std::cout << "Top point: " << i << ", " << j << std::endl;
            break;
        }
    }
    if (topRow != -1)
        break;
}
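Alternatively, OpenCV (2.4.4 and later) has cv::findNonZero, which collects the coordinates of all non-black pixels; the topmost point is then the one with the smallest y. A sketch under the same assumption of a CV_8UC1 binary image:

#include <algorithm>
#include <iostream>
#include <vector>
#include <opencv2/core.hpp>

std::vector<cv::Point> pts;
cv::findNonZero(image, pts);
if (!pts.empty()) {
    cv::Point top = *std::min_element(pts.begin(), pts.end(),
        [](const cv::Point& a, const cv::Point& b) { return a.y < b.y; });
    std::cout << "Top point: " << top.x << ", " << top.y << std::endl;
}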
What do cascade->flags and cascade->count signify?
I want to use this trained Haar classifier to detect faces; how can I use it without OpenCV's ready-made function?
Flags is the cascade's signature and count is the number of stages in the cascade; the OpenCV documentation covers both fields.
Here is my function applying cvHaarDetectObjects with a cascade loaded from OpenCV. I use cv::resize to perform detection on a smaller image (faster) and cv::equalizeHist to normalize the histogram.
cv::Rect Detect(const cv::Mat& img) {
    assert(img.type() == CV_8U);
    // rectangle result
    cv::Rect rdet;
    // small img size
    int w = img.cols / img_scale_;
    int h = img.rows / img_scale_;
    if ((small_img_.rows != h) || (small_img_.cols != w))
        small_img_.create(h, w, CV_8U);
    // grayscale img
    cv::Mat gray;
    if (img.channels() == 1)
        gray = img;
    else {
        gray = cv::Mat(img.rows, img.cols, CV_8U);
        cv::cvtColor(img, gray, CV_BGR2GRAY);
    }
    cv::resize(gray, small_img_, cv::Size(w, h), 0, 0, CV_INTER_LINEAR);
    cv::equalizeHist(small_img_, small_img_);
    // perform detection
    cvClearMemStorage(storage_);
    IplImage ipl_simg = small_img_;
    CvSeq* obj = cvHaarDetectObjects(&ipl_simg, cascade_, storage_,
                                     scale_factor_, min_neighbors_, 0, min_size_);
    if (obj->total == 0) {
        return cv::Rect(0, 0, 0, 0);
    }
    // keep the largest detection, scaled back to the original image size
    int maxv = 0;
    for (int i = 0; i < obj->total; i++) {
        CvRect* r = (CvRect*)cvGetSeqElem(obj, i);
        if (i == 0 || maxv < r->width * r->height) {
            maxv = r->width * r->height;
            rdet.x = r->x * img_scale_;
            rdet.y = r->y * img_scale_;
            rdet.width = r->width * img_scale_;
            rdet.height = r->height * img_scale_;
        }
    }
    cvRelease((void**)(&obj));
    return rdet;
}
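As an aside, with the C++ API the same idea can be written with cv::CascadeClassifier. This is only an illustrative sketch, not the code above:

#include <vector>
#include <opencv2/objdetect.hpp>
#include <opencv2/imgproc.hpp>

cv::Rect DetectLargest(const cv::Mat& img, cv::CascadeClassifier& cascade)
{
    cv::Mat gray;
    if (img.channels() == 1) gray = img;
    else cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, gray);

    std::vector<cv::Rect> objects;
    cascade.detectMultiScale(gray, objects, 1.1, 3);

    // keep the largest detection, as the C version above does
    cv::Rect best;
    for (size_t i = 0; i < objects.size(); i++)
        if (objects[i].area() > best.area()) best = objects[i];
    return best; // empty rectangle if nothing was found
}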
I am trying to upsample an image using bicubic interpolation. I need values that exactly match OpenCV's cvResize(), but the results of the following code do not match cvResize(). Can you take a look and help me fix the bug?
Image* Image::resize_using_Bicubic(int w, int h) {
    float dx, dy;
    float x, y;
    float tx, ty;
    float i, j, m, n;
    Image* result = new Image(w, h);
    tx = (float)this->m_width / (float)w;
    ty = (float)this->m_height / (float)h;
    for (i = 0; i < w; i++)
    {
        for (j = 0; j < h; j++)
        {
            x = i * tx;
            y = j * ty;
            dx = float(i - x) - (int)(i - x);
            dy = float(j - y) - (int)(j - y);
            float temp = 0.0;
            for (m = -1; m <= 2; m++)
            {
                for (n = -1; n <= 2; n++)
                {
                    int HIndex, WIndex;
                    HIndex = (y + n);
                    WIndex = (x + m);
                    if (HIndex < 0) {
                        HIndex = 0;
                    }
                    else if (HIndex > this->getHeight())
                    {
                        HIndex = this->getHeight() - 1;
                    }
                    if (WIndex < 0) {
                        WIndex = 0;
                    }
                    else if (WIndex > this->getWidth())
                    {
                        WIndex = this->getWidth() - 1;
                    }
                    temp += this->getPixel(HIndex, WIndex) * R(m - dx) * R(dy - n);
                }
            }
            result->setPixel(j, i, temp);
        }
    }
    return result;
}
You haven't said how different the results are. If they're very close, say within 1 or 2 in each RGB channel, this could be explained simply by roundoff differences.
There is more than one algorithm for Bicubic interpolation. Don Mitchell and Arun Netravali did an analysis and came up with a single formula to describe a number of them: http://www.mentallandscape.com/Papers_siggraph88.pdf
Edit: One more thing: the individual filter coefficients should be summed and used to divide the final value at the end, to normalize it. Also, I'm not sure why you have m-dx in one factor but dy-n in the other; shouldn't they have the same sign?
r=R(m-dx)*R(dy-n);
r_sum+=r;
temp+=this->getPixel(HIndex,WIndex)*r;
. . .
result->setPixel(j, i, temp/r_sum);
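As an aside, since R() isn't shown in the question: a common choice is the cubic convolution kernel below. I believe OpenCV's bicubic resize uses a = -0.75 (Keys' classic kernel uses a = -0.5); this is an assumption about what your R() computes, offered only as a reference:

#include <cmath>

// Cubic convolution kernel; 'a' controls the shape (assumed value below).
static float R(float x, float a = -0.75f)
{
    x = std::fabs(x);
    if (x <= 1.0f)
        return ((a + 2.0f) * x - (a + 3.0f)) * x * x + 1.0f;
    else if (x < 2.0f)
        return (((x - 5.0f) * x + 8.0f) * x - 4.0f) * a;
    return 0.0f;
}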
Change:
else if(HIndex>this->getHeight())
to:
else if(HIndex >= this->getHeight())
and change:
else if(WIndex>this->getWidth())
to:
else if(WIndex >= this->getWidth())
EDIT
Also change:
for(m=-1;m<=2;m++)
{
for(n=-1;n<=2;n++)
to:
for(m = -1; m <= 1; m++)
{
for(n = -1; n <= 1; n++)
I'm trying to access a specific row in a matrix but am having a hard time doing so.
I want to get the value at row j, column i but I don't think my algorithm is correct. I'm using OpenCV's Mat for my matrix and accessing it through the data member.
Here is how I am attempting to access values:
plane.data[i + j*plane.rows]
Where i = the column, j = the row. Is this correct? The Matrix is 1 plane from a YUV matrix.
Any help would be appreciated! Thanks.
No, that is wrong: plane.data[i + j*plane.rows] is not the right way to access a pixel. Raw indexing must account for the matrix's element type and its row stride (for an 8-bit single-channel plane it would be plane.data[j*plane.step + i]). It is safer to use the at() operator of the matrix.
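For the single 8-bit plane from the question, that lookup looks like this (a sketch, assuming the plane is CV_8UC1):

uchar v = plane.at<uchar>(j, i);           // at(row, col)
// equivalent raw access; step is the row stride in bytes:
uchar v2 = plane.data[j * plane.step + i];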
To keep it simple, here is a code sample that accesses each pixel of a matrix and prints it. It works for almost every matrix type and any number of channels:
// Forward declaration so printMat can call the template defined below it.
template <typename T>
void printMatTemplate(const Mat& M, bool isInt);

void printMat(const Mat& M) {
    // Infer the per-channel element size to pick the right instantiation
    // (assumes a continuous matrix).
    switch ((M.dataend - M.datastart) / (M.cols * M.rows * M.channels())) {
    case sizeof(char):
        printMatTemplate<unsigned char>(M, true);
        break;
    case sizeof(float):
        printMatTemplate<float>(M, false);
        break;
    case sizeof(double):
        printMatTemplate<double>(M, false);
        break;
    }
}

template <typename T>
void printMatTemplate(const Mat& M, bool isInt) {
    if (M.empty()) {
        printf("Empty Matrix\n");
        return;
    }
    if ((M.elemSize() / M.channels()) != sizeof(T)) {
        printf("Wrong matrix type. Cannot print\n");
        return;
    }
    int cols = M.cols;
    int rows = M.rows;
    int chan = M.channels();

    // Build the printf format string once, based on the element type.
    char printf_fmt[20];
    if (isInt)
        snprintf(printf_fmt, sizeof(printf_fmt), "%%d,");
    else
        snprintf(printf_fmt, sizeof(printf_fmt), "%%0.5g,");

    if (chan > 1) {
        // Print a multi-channel array, one parenthesized tuple per pixel.
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j < cols; j++) {
                printf("(");
                const T* Pix = &M.at<T>(i, j);
                for (int c = 0; c < chan; c++) {
                    printf(printf_fmt, Pix[c]);
                }
                printf(")");
            }
            printf("\n");
        }
        printf("-----------------\n");
    }
    else {
        // Single channel: walk each row with a raw pointer.
        for (int i = 0; i < rows; i++) {
            const T* Mi = M.ptr<T>(i);
            for (int j = 0; j < cols; j++) {
                printf(printf_fmt, Mi[j]);
            }
            printf("\n");
        }
        printf("\n");
    }
}
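A quick usage sketch (a hypothetical 2x2 plane, just for illustration):

Mat yuvPlane(2, 2, CV_8UC1, Scalar(128));
printMat(yuvPlane);   // prints "128,128," on each of the two rows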
I do not think there is anything different between accessing an RGB Mat and a YUV Mat; it's just a different colorspace.
Please refer to http://opencv.willowgarage.com/wiki/faq#Howtoaccessmatrixelements.3F for how to access each pixel.