opencv c++ inverse fourier transformation does not give the same image

I have a BGR image and convert it to Lab. I am trying to check whether taking the DFT of the L channel and then the inverse DFT gives back the same image.
// MARK: Split LAB Channel each
cv::Mat lab_resized_host_image;
cv::cvtColor(resized_host_image, lab_resized_host_image, cv::COLOR_BGR2Lab);
imshow("lab_resized_host_image", lab_resized_host_image);
cv::Mat channel_L_host_image, channel_A_host_image, channel_B_host_image;
std::vector<cv::Mat> channel_LAB_host_image(3);
cv::split(lab_resized_host_image, channel_LAB_host_image);
// MARK: DFT the channel_L host image.
channel_L_host_image = channel_LAB_host_image[0];
imshow("channel_L_host_image", channel_L_host_image);
cv::Mat padded_L;
int rows_L = getOptimalDFTSize(channel_L_host_image.rows);
int cols_L = getOptimalDFTSize(channel_L_host_image.cols);
copyMakeBorder(channel_L_host_image, padded_L, 0, rows_L - channel_L_host_image.rows, 0, cols_L - channel_L_host_image.cols, BORDER_CONSTANT, Scalar::all(0));
Mat planes_L[] = {Mat_<float>(padded_L), Mat::zeros(padded_L.size(), CV_32F)};
Mat complexI_L;
merge(planes_L, 2, complexI_L);
dft(complexI_L, complexI_L);
// MARK: iDFT Channel_L.
Mat complexI_channel_L = complexI_L;
Mat complexI_channel_L_idft;
cv::dft(complexI_L, complexI_channel_L_idft, cv::DFT_INVERSE|cv::DFT_REAL_OUTPUT);
normalize(complexI_channel_L_idft, complexI_channel_L_idft, 0, 1, NORM_MINMAX);
imshow("complexI_channel_L_idft", complexI_channel_L_idft);
Each imshow gives me a different image... I think the normalization may be the error...
What is wrong? Help!
[image: original image]
[image: idft result]

OpenCV's DFT is not normalized by default. One of the two transforms in a forward/backward pair must be scaled by 1/N for the pair to reproduce the input values. Simply add cv::DFT_SCALE to the flags of the inverse transform:
cv::dft(complexI_L, complexI_channel_L_idft, cv::DFT_INVERSE|cv::DFT_REAL_OUTPUT|cv::DFT_SCALE);
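To see the round trip work end to end, here is a minimal sketch (my own illustration, not from the question; the file name is a placeholder):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE); // placeholder file name
    if (gray.empty()) return 1;

    cv::Mat srcf;
    gray.convertTo(srcf, CV_32F);

    cv::Mat spectrum, restored;
    cv::dft(srcf, spectrum, cv::DFT_COMPLEX_OUTPUT);   // forward transform
    cv::dft(spectrum, restored,
            cv::DFT_INVERSE | cv::DFT_REAL_OUTPUT | cv::DFT_SCALE); // inverse, scaled by 1/N

    // Prints a value near 0: the round trip reproduces the input.
    std::cout << cv::norm(srcf, restored, cv::NORM_INF) << std::endl;
    return 0;
}

With DFT_SCALE in place, the NORM_MINMAX stretch before imshow is unnecessary, and that stretch can itself change the displayed brightness.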


Is it possible to recognize such minimal changes between noisy images in OpenCV?

I want to detect the very minimal movement of a conveyor belt using image evaluation (resolution: 31x512, frame rate: 1000 per second). The moment the belt starts is important for me.
If I do cv::absdiff between two subsequent images, I obtain a very noisy result:
According to the mechanical rotation sensor of the motor, the movement starts here:
I tried to threshold the abs-diff image with a cascade of erosion and dilation, but I could only detect the earliest change more than a second too late, in this image:
Is it possible to find the change earlier?
Here is the sequence of images without changes (according to the motor sensor):
In this sequence the movement begins in the middle image:
Looks like I've found a solution which works in MY case.
Instead of comparing the image changes in the spatial domain, cross-correlation should be applied:
I take the DFT of both images, multiply one spectrum by the conjugate of the other and transform back. The maximum pixel value is the center of the correlation. As long as the images are the same, the max stays in the same position; it moves otherwise.
The actual working code uses 3 images and 2 spectrum products, between images 1,2 and 2,3:
Mat img1_( 512, 32, CV_16UC1 );
Mat img2_( 512, 32, CV_16UC1 );
Mat img3_( 512, 32, CV_16UC1 );
//read the data into the images however you want. I read from an MHD file
//Set ROI (if required)
Mat img1 = img1_(cv::Rect(0,200,32,100));
Mat img2 = img2_(cv::Rect(0,200,32,100));
Mat img3 = img3_(cv::Rect(0,200,32,100));
//Float mats for DFT
Mat img1f;
Mat img2f;
Mat img3f;
//DFT and product mats
Mat dft1,dft2,dft3,dftproduct,dftproduct2;
//Calculate the DFT of each image
img1.convertTo(img1f, CV_32FC1);
cv::dft(img1f, dft1);
img2.convertTo(img2f, CV_32FC1);
cv::dft(img2f, dft2);
img3.convertTo(img3f, CV_32FC1);
cv::dft(img3f, dft3);
//Multiply DFT Mats
cv::mulSpectrums(dft1, dft2, dftproduct,  0, true); // conjugate 2nd spectrum -> cross-correlation
cv::mulSpectrums(dft2, dft3, dftproduct2, 0, true);
//Convert back to space domain
cv::Mat result,result2;
cv::idft(dftproduct,result);
cv::idft(dftproduct2,result2);
//Not sure if required, I needed it for visualizing
cv::normalize( result, result, 0, 255, NORM_MINMAX, CV_8UC1);
cv::normalize( result2, result2, 0, 255, NORM_MINMAX, CV_8UC1);
//Find maxima positions
double dummy;
Point locdummy; Point maxLoc1; Point maxLoc2;
cv::minMaxLoc(result, &dummy, &dummy, &locdummy, &maxLoc1);
cv::minMaxLoc(result2, &dummy, &dummy, &locdummy, &maxLoc2);
//Calculate the products simply to have one value to compare
int maxlocProd1 = maxLoc1.x*maxLoc1.y;
int maxlocProd2 = maxLoc2.x*maxLoc2.y;
//Calculate the absolute difference of the products. Non-zero means movement
int absPosDiff = std::abs(maxlocProd2-maxlocProd1);
if (absPosDiff > 0)
{
    std::cout << id << std::endl; // id: frame index from the enclosing acquisition loop (not shown)
    break;
}
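As a side note, newer OpenCV versions bundle this frequency-domain comparison into cv::phaseCorrelate, which returns the sub-pixel shift between two single-channel float images directly. A sketch of how it could replace the manual DFT bookkeeping (hasMoved and eps are my own names):

#include <opencv2/opencv.hpp>
#include <cmath>

// Movement test between two CV_16UC1 frames via phase correlation.
// eps is the shift magnitude (in pixels) below which frames count as static.
bool hasMoved(const cv::Mat& frameA, const cv::Mat& frameB, double eps = 0.5)
{
    cv::Mat fa, fb;
    frameA.convertTo(fa, CV_32FC1); // phaseCorrelate expects float input
    frameB.convertTo(fb, CV_32FC1);
    cv::Point2d shift = cv::phaseCorrelate(fa, fb); // sub-pixel displacement
    return std::hypot(shift.x, shift.y) > eps;      // non-zero shift => belt moved
}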

opencv drawing a 2d histogram

I'm wondering how to plot a 2D histogram of an HSV Mat in OpenCV C++. My current code attempting to display it fails miserably. I've looked around for how to plot histograms, and all the ones I've found plot them as independent 1D histograms.
Here's my current output with the number of hue bins being 30 and saturation bins being 32:
Here's another output with the number of hue bins being 7 and saturation bins being 5:
I would like it to look more like the result here
http://docs.opencv.org/doc/tutorials/imgproc/histograms/histogram_calculation/histogram_calculation.html
I also noticed whenever I do cout << Hist.size it gives me 50x50. Am I to understand that just means the first dimension of the array is 250 in size?
Also, how does one sort the histogram from highest to lowest (or vice versa) frequency? That is another problem I am trying to solve.
My current function is as follows.
void Perform_Hist(Mat& MeanShift, Mat& Pyramid_Result, Mat& BackProj){
    Mat HSV, Hist;
    int histSize[] = {hbins, sbins};
    int channels[] = {0, 1};
    float hranges[] = {0, 180};
    float sranges[] = {0, 256};
    const float* ranges[] = {hranges, sranges};

    cvtColor(MeanShift, HSV, CV_BGR2HSV);
    Mat PyrGray = Pyramid_Result.clone();

    calcHist(&HSV, 1, channels, Mat(), Hist, 2, histSize, ranges, true, false);
    normalize(Hist, Hist, 0, 255, NORM_MINMAX, -1, Mat());
    invert(Hist, Hist, 1);
    calcBackProject(&PyrGray, 1, channels, Hist, BackProj, ranges, 1, true);

    double maxVal = 0; minMaxLoc(Hist, 0, &maxVal, 0, 0);
    int scale = 10;
    Mat histImage = Mat::zeros(sbins*scale, hbins*10, CV_8UC3);
    for(int i = 1; i < hbins * sbins; i++){
        line(histImage,
             Point(hbins*sbins*(i-1), sbins - cvRound(Hist.at<float>(i-1))),
             Point(hbins*sbins*(i-1), sbins - cvRound(Hist.at<float>(i))),
             Scalar(255,0,0), 2, 8, 0);
    }
    imshow(HISTOGRAM, histImage);
}
Did you mean something like this?
It is an HSV histogram shown as a 3D graph.
V is ignored to get down to 3D (otherwise it would be a 4D graph ...).
If yes, then this is how to do it (I do not use OpenCV, so adjust it to your needs):
1) Convert the source image to HSV.
2) Compute the histogram, ignoring the V value. All colors with the same H,S are counted as a single color no matter what the V is. You could ignore another channel instead, but V looks like the best choice.
3) Draw the graph. First draw an ellipse in a darker color (the HSV base disc); then, for each dot, take the corresponding histogram value and draw a vertical line in a brighter color, with the line length proportional to the histogram value.
Here is the C++ code I did this with:
picture pic0,pic1,pic2,zed;
int his[65536];
DWORD w;
int h,s,v,x,y,z,i,n;
double r,a;
color c;

// compute histogram (ignore v)
pic2=pic0;                          // copy input image pic0 to pic2
pic2.rgb2hsv();                     // convert to HSV
for (x=0;x<65536;x++) his[x]=0;     // clear histogram
for (y=0;y<pic2.ys;y++)             // compute it
 for (x=0;x<pic2.xs;x++)
 {
    c=pic2.p[y][x];
    h=c.db[picture::_h];
    s=c.db[picture::_s];
    w=h+(s<<8);                     // form 16-bit number from 24-bit HSV color
    his[w]++;                       // update color usage count ...
 }
for (n=0,x=0;x<65536;x++) if (n<his[x]) n=his[x]; // max probability

// draw the colored HSV base plane and histogram
zed=pic1; zed.clear(999);           // zed buffer for 3D
pic1.clear(0);                      // image of histogram
for (h=0;h<255;h++)
 for (s=0;s<255;s++)
 {
    c.db[picture::_h]=h;
    c.db[picture::_s]=s;
    c.db[picture::_v]=100;          // HSV base darker
    c.db[picture::_a]=0;
    x=pic1.xs>>1;                   // HSV base disc position centers on the bottom
    y=pic1.ys-100;
    a=2.0*M_PI*double(h)/256.0;     // disc -> x,y
    r=double(s)/256.0;
    x+=120.0*r*cos(a);              // ellipse for 3D illusion
    y+= 50.0*r*sin(a);
    z=-y;
    if (zed.p[y][x].dd>=z) { pic1.p[y][x]=c; zed.p[y][x].dd=z; } x++;
    if (zed.p[y][x].dd>=z) { pic1.p[y][x]=c; zed.p[y][x].dd=z; } y++;
    if (zed.p[y][x].dd>=z) { pic1.p[y][x]=c; zed.p[y][x].dd=z; } x--;
    if (zed.p[y][x].dd>=z) { pic1.p[y][x]=c; zed.p[y][x].dd=z; } y--;
    w=h+(s<<8);                     // get histogram index for this color
    i=((pic1.ys-150)*his[w])/n;
    c.db[picture::_v]=255;          // histogram brighter
    for (;(i>0)&&(y>0);i--,y--)
    {
        if (zed.p[y][x].dd>=z) { pic1.p[y][x]=c; zed.p[y][x].dd=z; } x++;
        if (zed.p[y][x].dd>=z) { pic1.p[y][x]=c; zed.p[y][x].dd=z; } y++;
        if (zed.p[y][x].dd>=z) { pic1.p[y][x]=c; zed.p[y][x].dd=z; } x--;
        if (zed.p[y][x].dd>=z) { pic1.p[y][x]=c; zed.p[y][x].dd=z; } y--;
    }
 }
pic1.hsv2rgb();                     // convert to RGB to see correct colors
pic1.hsv2rgb(); // convert to RGB to see correct colors
The input image is pic0 (the rose); the output image is pic1 (the histogram graph).
pic2 is pic0 converted to HSV for the histogram computation.
zed is the Z-buffer for the 3D display, avoiding Z sorting ...
I use my own picture class for images, so some members are:
xs,ys - size of the image in pixels
p[y][x].dd - pixel at (x,y) position, as a 32-bit integer type
clear(color) - clears the entire image
resize(xs,ys) - resizes the image to a new resolution
rgb2hsv() and hsv2rgb() ... guess what they do :)
[edit1] your 2D histogram
It looks like you have the color coded into a 2D array, with one axis being H and the other S. So you need to compute the H,S values from the array address. If the mapping is linear, then for HSV[i][j]:
H=h0+(h1-h0)*i/maxi
S=s0+(s1-s0)*j/maxj
(or with i,j reversed)
h0,h1,s0,s1 are the color ranges
maxi,maxj are the array sizes
As you can see, you also discard V like me, so now you have H,S for each cell in the 2D histogram array, where the cell value is the probability. Now, if you want to draw an image, you need to decide how to present it (as a 2D graph, 3D, mapping, ...). For an unsorted 2D graph, draw a graph where:
x=i+maxi*j
y=HSV[i][j]
color=(H,S,V=200);
If you want to sort it, then just compute the x axis differently, or loop over the 2D array in sorted order and simply increment x.
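Translating this idea into OpenCV, here is a minimal sketch of one variant (my own illustration, assuming Hist is the hbins x sbins CV_32F matrix produced by calcHist in the question): each cell is painted in its own hue/saturation, with the bin count encoded as brightness.

#include <opencv2/opencv.hpp>

// Render an hbins x sbins H-S histogram as a grid of colored cells.
// Each cell shows its own hue/saturation; the bin count drives the V channel.
cv::Mat drawHSHistogram(const cv::Mat& Hist, int hbins, int sbins, int scale = 10)
{
    double maxVal = 0;
    cv::minMaxLoc(Hist, 0, &maxVal);
    if (maxVal <= 0) maxVal = 1; // guard against an empty histogram

    cv::Mat hsvImg(sbins * scale, hbins * scale, CV_8UC3, cv::Scalar::all(0));
    for (int h = 0; h < hbins; h++)
        for (int s = 0; s < sbins; s++)
        {
            float binVal = Hist.at<float>(h, s);
            uchar v = cv::saturate_cast<uchar>(255.0 * binVal / maxVal);
            cv::rectangle(hsvImg,
                          cv::Rect(h * scale, s * scale, scale, scale),
                          cv::Scalar(h * 180 / hbins, s * 255 / sbins, v), // HSV color
                          cv::FILLED);
        }
    cv::Mat bgr;
    cv::cvtColor(hsvImg, bgr, cv::COLOR_HSV2BGR); // imshow expects BGR
    return bgr;
}

Cells that never occur stay black; frequent H-S combinations glow in their own color.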
[edit2] update of code and some images
I have repaired the C++ code above (wrong Z value sign, changed the Z-buffer condition, and added bigger points for nicer output). Your 2D array colors can look like this:
One axis/index is H, the other is S, and Value is fixed (I chose 200). If your axes are swapped, then just mirror it by y=x, I think ...
The color sorting is really just the order in which you pick the colors from the array. For example:
v=200; x=0;
for (h=0;h<256;h++)
 for (s=0;s<256;s++,x++)
 {
    y=HSV[h][s];
    // here draw line (x,0)->(x,y) with color hsv2rgb(h,s,v);
 }
This is the incrementing way. You can compute x from H,S instead to achieve a different ordering, or swap the for loops (the x++ must stay in the inner loop). A sketch of sorting by frequency follows below.
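For the sorting sub-question, the simplest route is to flatten the bins into (count, h, s) triples and sort those. A sketch under the same Hist assumption (HistBin and sortBins are my own names):

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

struct HistBin { float count; int h, s; };

// Collect all H-S bins and order them by frequency, highest first.
std::vector<HistBin> sortBins(const cv::Mat& Hist) // hbins x sbins, CV_32F
{
    std::vector<HistBin> bins;
    for (int h = 0; h < Hist.rows; h++)
        for (int s = 0; s < Hist.cols; s++)
            bins.push_back({Hist.at<float>(h, s), h, s});
    std::sort(bins.begin(), bins.end(),
              [](const HistBin& a, const HistBin& b) { return a.count > b.count; });
    return bins;
}

Drawing the bars in this order gives the highest-to-lowest plot; reverse the comparison for the other direction.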
If you want an RGB histogram plot instead, see:
how to plot rgb color histogram of image with objective c

comparing blob detection and Structural Analysis and Shape Descriptors in opencv

I need to use blob detection and Structural Analysis and Shape Descriptors (more specifically findContours, drawContours and moments) to detect colored circles in an image. I need to know the pros and cons of each method and which method is better. Can anyone show me the differences between these two methods, please?
As @scap3y suggested in the comments, I'd go for a much simpler approach. What I always do in these cases is something similar to this:
// Convert your image to HSV color space
Mat hsv;
hsv.create(originalImage.size(), CV_8UC3);
cvtColor(originalImage,hsv,CV_RGB2HSV);
// Choose the range in each of hue, saturation and value and threshold the other pixels
Mat thresholded;
uchar loH = 130, hiH = 170;
uchar loS = 40, hiS = 255;
uchar loV = 40, hiV = 255;
inRange(hsv, Scalar(loH, loS, loV), Scalar(hiH, hiS, hiV), thresholded);
// Find contours in the image (additional step could be to
// apply morphologyEx() first)
vector<vector<Point>> contours;
findContours(thresholded,contours,CV_RETR_EXTERNAL,CHAIN_APPROX_SIMPLE);
// Draw your contours as ellipses into the original image.
// valuable_rectangle_indices holds the indices of contours that passed
// your filtering step (one possible filter is sketched below)
RotatedRect rect;
for (int i = 0; i < (int)valuable_rectangle_indices.size(); i++) {
    rect = minAreaRect(contours[valuable_rectangle_indices[i]]);
    ellipse(originalImage, rect, Scalar(0,0,255)); // draw ellipse
}
The only thing left for you to do now is to figure out in what range your markers are in HSV color space.
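The snippet leaves out how valuable_rectangle_indices gets filled. One plausible (hypothetical, not from the original answer) filter for colored circles is a circularity test on each contour, slotted in right after findContours:

// Hypothetical filter for valuable_rectangle_indices: keep contours that
// are large enough and roughly circular. Circularity 4*pi*A/P^2 equals
// 1.0 for an ideal circle and drops toward 0 for elongated shapes.
std::vector<int> valuable_rectangle_indices;
for (int i = 0; i < (int)contours.size(); i++) {
    double area  = cv::contourArea(contours[i]);
    double perim = cv::arcLength(contours[i], true);
    if (perim > 0 && area > 50.0 &&                 // drop tiny specks
        4.0 * CV_PI * area / (perim * perim) > 0.7) // "round enough"
        valuable_rectangle_indices.push_back(i);
}

The area and circularity thresholds are placeholders to tune for your image scale.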

Sorting a matrix and placing it in one row

I am trying to figure out a way of sorting a 3x3 region into a 9x1 row.
So I have the following:
I want to end up with this:
This is what I have ended up doing so far:
Rect roi(y-1,x-1,kernel,kernel);
Mat image_roi = image(roi);
Mat image_sort(kernel, kernel, CV_8U);
cv::sort(image_roi, image_sort, CV_SORT_ASCENDING+CV_SORT_EVERY_ROW);
The code is not functional; currently I cannot find any data in image_sort after it is "sorted".
Are you sure you have single-channel grey-level images? Try:
cv::Mat image_sort = cv::Mat::zeros(roi.height, roi.width, image.type()); // allocate memory
image(roi).copyTo(image_sort); // copy the ROI data into image_sort (makes it continuous)
std::sort(image_sort.data, image_sort.dataend); // sort all pixels in place (valid for CV_8U)
cv::Mat vectorized = image_sort.reshape(1, 1); // reshape the WxH matrix into a 1x(W*H) row
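A quick self-contained check of this approach, with an arbitrary 3x3 patch:

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <iostream>

int main()
{
    // 3x3 sample patch with scrambled values
    cv::Mat patch = (cv::Mat_<uchar>(3, 3) << 9, 1, 5,
                                              3, 7, 2,
                                              8, 4, 6);
    cv::Mat sorted = patch.clone();         // continuous copy
    std::sort(sorted.data, sorted.dataend); // sort all 9 bytes
    cv::Mat row = sorted.reshape(1, 1);     // 1x9 row
    std::cout << row << std::endl;          // [1, 2, 3, ..., 9]
    return 0;
}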

OpenCV - Image Stitching

I am using the following code to stitch two input images. For an unknown
reason the output result is crap!
It seems that the homography matrix is wrong (or is applied wrongly)
because the transformed image looks like an "exploded star"!
I have commented the part that I guess is the source of the problem,
but I cannot figure it out.
Any help or pointer is appreciated!
Have a nice day,
Ali
void Stitch2Image(IplImage *mImage1, IplImage *mImage2)
{
    // Convert input images to gray
    IplImage* gray1 = cvCreateImage(cvSize(mImage1->width, mImage1->height), 8, 1);
    cvCvtColor(mImage1, gray1, CV_BGR2GRAY);
    IplImage* gray2 = cvCreateImage(cvSize(mImage2->width, mImage2->height), 8, 1);
    cvCvtColor(mImage2, gray2, CV_BGR2GRAY);

    // Convert gray images to Mat
    Mat img1(gray1);
    Mat img2(gray2);

    // Detect FAST keypoints and BRIEF features in the first image
    FastFeatureDetector detector(50);
    BriefDescriptorExtractor descriptorExtractor;
    BruteForceMatcher<L1<uchar> > descriptorMatcher;
    vector<KeyPoint> keypoints1;
    detector.detect( img1, keypoints1 );
    Mat descriptors1;
    descriptorExtractor.compute( img1, keypoints1, descriptors1 );

    /* Detect FAST keypoints and BRIEF features in the second image*/
    vector<KeyPoint> keypoints2;
    detector.detect( img1, keypoints2 );
    Mat descriptors2;
    descriptorExtractor.compute( img2, keypoints2, descriptors2 );

    vector<DMatch> matches;
    descriptorMatcher.match(descriptors1, descriptors2, matches);
    if (matches.size() == 0)
        return;

    vector<Point2f> points1, points2;
    for (size_t q = 0; q < matches.size(); q++)
    {
        points1.push_back(keypoints1[matches[q].queryIdx].pt);
        points2.push_back(keypoints2[matches[q].trainIdx].pt);
    }

    // Create the result image
    result = cvCreateImage(cvSize(mImage2->width * 2, mImage2->height), 8, 3);
    cvZero(result);

    // Copy the second image into the result image
    cvSetImageROI(result, cvRect(mImage2->width, 0, mImage2->width, mImage2->height));
    cvCopy(mImage2, result);
    cvResetImageROI(result);

    // Create warp image
    IplImage* warpImage = cvCloneImage(result);
    cvZero(warpImage);

    /************************** Is there anything wrong here!? *******************/
    // Find homography matrix
    Mat H = findHomography(Mat(points1), Mat(points2), 8, 3.0);
    CvMat HH = H; // Is this line converted correctly?
    // Transform warp image
    cvWarpPerspective(mImage1, warpImage, &HH);
    // Blend
    blend(result, warpImage);
    /*******************************************************************************/

    cvReleaseImage(&gray1);
    cvReleaseImage(&gray2);
    cvReleaseImage(&warpImage);
}
This is what I would suggest you try, in this order:
1) Use the CV_RANSAC option for the homography. See http://opencv.willowgarage.com/documentation/cpp/calib3d_camera_calibration_and_3d_reconstruction.html
2) Try other descriptors, particularly SIFT or SURF, which ship with OpenCV. For some images FAST or BRIEF descriptors are not discriminating enough. EDIT (Aug '12): The ORB descriptors, which are based on BRIEF, are quite good and fast!
3) Look at the homography matrix (step through in debug mode or print it) and see if it is consistent.
4) If the above does not give you a clue, look at the matches that are formed. Is one point in one image being matched to a number of points in the other image? If so, the problem again should be with the descriptors or the detector.
My hunch is that it is the descriptors (so 1) or 2) should fix it).
Also switch to Hamming distance instead of L1 distance in BruteForceMatcher. BRIEF descriptors are supposed to be compared using Hamming distance.
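Combining suggestion 1) with the Hamming-distance advice, a rough sketch in the newer C++ API (OpenCV 3.x/4.x, not the C API the question uses; names are illustrative), with ORB, which is Hamming-matched like BRIEF:

#include <opencv2/opencv.hpp>

// Match two gray images with ORB + Hamming distance, then estimate the
// homography with RANSAC so outlier matches are rejected.
cv::Mat estimateHomography(const cv::Mat& img1, const cv::Mat& img2)
{
    cv::Ptr<cv::ORB> orb = cv::ORB::create(500);
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    orb->detectAndCompute(img1, cv::noArray(), kp1, desc1);
    orb->detectAndCompute(img2, cv::noArray(), kp2, desc2);

    cv::BFMatcher matcher(cv::NORM_HAMMING, true); // cross-check matches
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);

    std::vector<cv::Point2f> pts1, pts2;
    for (const cv::DMatch& m : matches) {
        pts1.push_back(kp1[m.queryIdx].pt);
        pts2.push_back(kp2[m.trainIdx].pt);
    }
    if (pts1.size() < 4) return cv::Mat();          // need >= 4 correspondences
    return cv::findHomography(pts1, pts2, cv::RANSAC, 3.0);
}

The cross-check flag keeps only mutually-best matches, which thins out exactly the one-point-to-many-points pairings mentioned in 4) before RANSAC handles the rest.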
Your homography might be calculated from wrong matches and thus represent a bad alignment.
I suggest passing the matrix through an additional check for linear dependency between its rows.
You can use the following code:
bool cvExtCheckTransformValid(const Mat& T)
{
    // Check the shape of the matrix
    if (T.empty())   return false;
    if (T.rows != 3) return false;
    if (T.cols != 3) return false;

    // Check for linear dependency: if the element-wise ratio of the first
    // two rows is nearly constant, the rows are (almost) proportional and
    // the homography is degenerate.
    Mat tmp;
    T.row(0).copyTo(tmp);
    tmp /= T.row(1);
    Scalar mean;
    Scalar stddev;
    meanStdDev(tmp, mean, stddev);
    double X = std::abs(stddev[0] / mean[0]);
    printf("std of H: %g\n", X);
    if (X < 0.8)
        return false;
    return true;
}
