Applying a sharpness function over a certain region of an image in OpenCV

I want to apply the Tenengrad algorithm to a central rectangular region inside the image. Assuming that I have the coordinates of the vertices of the rectangular region (or one corner and the dimensions), how can I modify the following code to apply the sharpness measure over the selected region only?
double tenengrad(const cv::Mat& src, int ksize)
{
    cv::Mat Gx, Gy;
    cv::Sobel(src, Gx, CV_64F, 1, 0, ksize);
    cv::Sobel(src, Gy, CV_64F, 0, 1, ksize);

    cv::Mat FM = Gx.mul(Gx) + Gy.mul(Gy);
    double focusMeasure = cv::mean(FM).val[0];
    return focusMeasure;
}

cv::Mat imageRegion = src(cv::Rect(x, y, width, height));
This creates a matrix header that points to the region of the original image specified by the rectangle (x, y, width, height); no pixel data is copied, so modifying imageRegion will modify the original image src. You can therefore use imageRegion instead of src:
cv::Mat Gx, Gy;
cv::Sobel(imageRegion, Gx, CV_64F, 1, 0, ksize);
cv::Sobel(imageRegion, Gy, CV_64F, 0, 1, ksize);
cv::Mat FM = Gx.mul(Gx) + Gy.mul(Gy);
double focusMeasure = cv::mean(FM).val[0];
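Putting it together as one function, here is a minimal sketch (the helper name tenengradROI and the clamping of the rectangle to the image bounds are my additions, not part of the original answer):
double tenengradROI(const cv::Mat& src, cv::Rect roi, int ksize)
{
    roi &= cv::Rect(0, 0, src.cols, src.rows); // clamp the ROI to the image bounds
    cv::Mat region = src(roi);                 // a view into src; use .clone() for an independent copy

    cv::Mat Gx, Gy;
    cv::Sobel(region, Gx, CV_64F, 1, 0, ksize);
    cv::Sobel(region, Gy, CV_64F, 0, 1, ksize);

    cv::Mat FM = Gx.mul(Gx) + Gy.mul(Gy);
    return cv::mean(FM).val[0];
}
Called, for example, as tenengradROI(src, cv::Rect(x, y, width, height), 3).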

Related

How to compute the divergence and gradient of an image in OpenCV?

I know that to implement a term of the form 1 - 2*lambda*Laplacian(o_k) I would use this code:
Mat o_k;      // input image (loaded elsewhere)
Mat Lapl;
double lambda;
// Laplacian(src, dst, ddepth, ksize, scale, delta, borderType)
Laplacian(o_k, Lapl, o_k.depth(), 1, 1, 0, BORDER_REFLECT);
Lapl = 1.0 - 2.0*lambda*Lapl;
However, I am now trying to implement a second equation, whose outer operation is a divergence applied to a term in parentheses. I know the div, or divergence, part would be like this, right?
int ksize = parser.get<int>("ksize");
int scale = parser.get<int>("scale");
int delta = parser.get<int>("delta");
Mat sobelx, sobely, div;
Sobel(res, sobelx, CV_64F, 1, 0, ksize, scale, delta, BORDER_DEFAULT);
Sobel(res, sobely, CV_64F, 0, 1, ksize, scale, delta, BORDER_DEFAULT);
div = sobelx + sobely;
where res is the result of the term in parentheses. But how do I get the term in parentheses?
Or am I doing this wrong? Would div above actually be equal to the gradient of res? If so, how do I get the divergence?
EDIT:
According to this link, the gradient magnitude can also be approximated as mag = abs(x) + abs(y): https://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/sobel_derivatives/sobel_derivatives.html#sobel-derivatives
And since the divergence of a gradient is the Laplacian, would the code below be equivalent to the second equation?
Mat abs_grad_x, abs_grad_y;
Sobel(res, sobelx, CV_64F, 1, 0, ksize, scale, delta, BORDER_DEFAULT);
Sobel(res, sobely, CV_64F, 0, 1, ksize, scale, delta, BORDER_DEFAULT);
convertScaleAbs( sobelx, abs_grad_x );
convertScaleAbs( sobely, abs_grad_y );
/// Total gradient (approximate)
Mat mag;
addWeighted( abs_grad_x, 1, abs_grad_y, 1, 0, mag );
Laplacian(o_k, Lapl, o_k.depth(), 1, 1, 0, BORDER_REFLECT);
Mat top = lambda * Lapl;
Mat result;
divide(top, mag, result);
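For reference, a divergence operates on a vector field rather than on a scalar image: if F = (Fx, Fy), then div F = dFx/dx + dFy/dy, where each partial derivative is taken on a different component image. A minimal sketch under that reading (Fx and Fy are placeholder names for the two component images of the term in parentheses):
// Sketch: divergence of a 2-D vector field (Fx, Fy).
// Fx and Fy are assumed to be CV_64F single-channel images.
Mat dFx_dx, dFy_dy;
Sobel(Fx, dFx_dx, CV_64F, 1, 0, ksize); // dFx/dx
Sobel(Fy, dFy_dy, CV_64F, 0, 1, ksize); // dFy/dy
Mat divF = dFx_dx + dFy_dy;             // div F = dFx/dx + dFy/dy
Applying both Sobel calls to the same scalar image res, as in the snippet above, instead produces the two components of the gradient of res; their sum is dres/dx + dres/dy, which is neither the gradient magnitude nor a divergence. Taking the divergence of the gradient of res does give the Laplacian, which is why the direct Laplacian(...) call covers that special case.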

OpenCV: Why is projectPoints() giving me weird results?

Why is my code snippet giving me weird results for projected points?
// Generate the one 3D point which I want to project onto the 2D plane
vector<Point3d> points_3d;
points_3d.push_back(Point3d(10, 10, 100));
Mat points3d = Mat(points_3d);
// Generate the identity matrix and zero vector for the rotation matrix and translation vector
Mat rvec = (Mat_<double>(3, 3) << (1, 0, 0, 0, 1, 0, 0, 0, 1));
Mat tvec = (Mat_<double>(3, 1) << (0, 0, 0));
// Generate a camera intrinsic matrix
Mat K = (Mat_<double>(3, 3) << (1000, 0, 50,
                                0, 1000, 50,
                                0, 0, 1));
// Project the 3D point onto the 2D plane
Mat points_2d;
projectPoints(points_3d, rvec, tvec, K, Mat(), points_2d);
// Output
cout << points_2d;
I get this as the projected 2D point:
points_2d = (-1.708699427820658e+024, -9.673395654445999e-026)
If I calculate it on paper myself, I expect the point points_2d = (150, 150) from the pinhole projection formula u = fx*X/Z + cx, v = fy*Y/Z + cy, i.e. (1000*10/100 + 50, 1000*10/100 + 50) = (150, 150).
Add cv::Rodrigues(InputArray src, OutputArray dst, OutputArray jacobian = noArray()). OpenCV uses a rotation vector in its calculations instead of a rotation matrix; the Rodrigues transformation lets you convert a rotation vector to a matrix and a matrix to a vector. Below I have attached part of your code with one line added.
// Generate the identity matrix and zero vector for the rotation matrix and translation vector
Mat rvec, rMat = (Mat_<double>(3, 3) << 1, 0, 0, 0, 1, 0, 0, 0, 1);
Rodrigues(rMat, rvec); // here: convert the rotation matrix to a rotation vector
Mat tvec = (Mat_<double>(3, 1) << 0, 0, 0);
And it should work properly. It is also better to define the distortion coefficients explicitly, as
Mat dist = Mat::zeros(8, 1, CV_32F);
EDIT:
One more remark: you have a small syntax error in the matrix initialization:
cv::Mat rvec, rMat = (cv::Mat_<double>(3, 3) << /* ( */ 1, 0, 0, 0, 1, 0, 0, 0, 1); // you had an error here
cv::Rodrigues(rMat, rvec);
cv::Mat tvec = (cv::Mat_<double>(3, 1) << /* ( */ 0, 0, 0); // and here
It works on my computer after these changes.
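Putting the corrections together, a minimal self-contained sketch (the main() scaffolding is my addition; with the identity rotation, zero translation, and this K, the output should be approximately (150, 150)):
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // One 3D point to project
    std::vector<cv::Point3d> points_3d;
    points_3d.push_back(cv::Point3d(10, 10, 100));

    // Identity rotation as a matrix, converted to the vector form projectPoints() expects
    cv::Mat rvec, rMat = (cv::Mat_<double>(3, 3) << 1, 0, 0, 0, 1, 0, 0, 0, 1);
    cv::Rodrigues(rMat, rvec);
    cv::Mat tvec = (cv::Mat_<double>(3, 1) << 0, 0, 0);

    // Camera intrinsics and zero distortion
    cv::Mat K = (cv::Mat_<double>(3, 3) << 1000, 0, 50, 0, 1000, 50, 0, 0, 1);
    cv::Mat dist = cv::Mat::zeros(8, 1, CV_64F);

    std::vector<cv::Point2d> points_2d;
    cv::projectPoints(points_3d, rvec, tvec, K, dist, points_2d);
    std::cout << points_2d[0] << std::endl; // expected: [150, 150]
    return 0;
}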

How to use Edge Orientation histogram for object detection?

I am working on object detection code, and I chose the edge orientation histogram as the descriptor for matching.
I am facing a problem with the back-projected histogram: I don't seem to get a good match, and the back-projected image is mostly white, which means that I cannot use mean shift (or similar) to detect the object.
Please help me regarding this matter. Here is what I've done so far:
Take an initial ROI (the target to be detected in the video stream).
Convert the ROI to grayscale.
Apply the Sobel operator for both the x and y derivatives.
Calculate the orientation using the OpenCV phase function (from the x and y derivatives).
Make a histogram of the generated orientations, with the following specs: range 0 to 2*PI, single channel, 256 bins.
Normalize the histogram.
The code for these steps is the following:
Mat ROI_grad_x, ROI_grad_y , ROI_grad , ROI_gray;
Mat ROI_abs_grad_x, ROI_abs_grad_y;
cvtColor(ROI, ROI_gray, CV_BGR2GRAY);
/// Gradient X
Sobel( ROI_gray, ROI_grad_x, CV_16S, 1, 0, 3 );
/// Gradient Y
Sobel( ROI_gray, ROI_grad_y, CV_16S, 0, 1, 3 );
convertScaleAbs( ROI_grad_x, ROI_abs_grad_x );
convertScaleAbs( ROI_grad_y, ROI_abs_grad_y );
addWeighted( ROI_abs_grad_x, 0.5, ROI_abs_grad_y, 0.5, 0, ROI_grad );
Mat ROI_orientation = Mat::zeros(ROI_abs_grad_x.rows, ROI_abs_grad_y.cols, CV_32F); // to store the per-pixel orientations
Mat ROI_orientation_norm;
ROI_grad_x.convertTo(ROI_grad_x,CV_32F);
ROI_grad_y.convertTo(ROI_grad_y,CV_32F);
phase(ROI_grad_x, ROI_grad_y, ROI_orientation , false);
Mat ROI_orientation_hist;
float ROI_orientation_range[] = {0, CV_PI};
const float *ROI_orientation_histRange[] = {ROI_orientation_range};
int ROI_orientation_histSize = 256;
//calcHist( &ROI_orientation, 1, 0, Mat(), ROI_orientation_hist, 1, &ROI_orientation_histSize, &ROI_orientation_histRange , true, false);
calcHist( &ROI_orientation, 1, 0, Mat(), ROI_orientation_hist, 1, &ROI_orientation_histSize, ROI_orientation_histRange , true, false);
normalize( ROI_orientation_hist, ROI_orientation_hist, 0, 255, NORM_MINMAX, -1, Mat() );
Then, for each captured camera frame, I do the following steps:
Convert to grayscale.
Apply the Sobel operator for both the x derivative and the y derivative.
Compute the orientation using the OpenCV phase function (using the two derivatives mentioned above).
Back-project the histogram onto the frame's orientation matrix to get the matches.
The code used for this part is the following:
Mat grad_x, grad_y , grad;
Mat abs_grad_x, abs_grad_y;
/// Gradient X
Sobel( frame_gray, grad_x, CV_16S, 1, 0, 3 );
/// Gradient Y
Sobel( frame_gray, grad_y, CV_16S, 0, 1, 3 );
convertScaleAbs( grad_x, abs_grad_x );
convertScaleAbs( grad_y, abs_grad_y );
addWeighted( abs_grad_x, 0.5, abs_grad_y, 0.5, 0, grad );
///======================
Mat orientation = Mat::zeros(abs_grad_x.rows, abs_grad_y.cols, CV_32F); // to store the per-pixel orientations
Mat orientation_norm;
grad_x.convertTo(grad_x,CV_32F);
grad_y.convertTo(grad_y,CV_32F);
phase(grad_x, grad_y, orientation , false);
Mat EOH_backProj;
calcBackProject( &orientation, 1, 0, ROI_orientation_hist, EOH_backProj, ROI_orientation_histRange, 1, true );
So, what seems to be the problem in my approach?
Thanks a lot.

Find overlapping/complex circles with OpenCV

I want to compute the radius of the red circles (fig. 2). I am having trouble finding these circles using HoughCircles from OpenCV: as you can see in fig. 2, I can only find the little circles in the center, which are shown in black.
original, fig. 2
Since I know the centers of the red circles (which are the same as the centers of the black ones), is there a way to simply compute the radius of the red circles?
Is it also possible to have a generic way of computing the radius of circles in a more complex image such as this one?
Edit: here is the interesting part of my code, after obtaining fig. 2:
threshold(maskedImage, maskedImage, thresh, 255, THRESH_BINARY_INV | THRESH_OTSU);
std::vector<Vec3f> circles;
// Canny(maskedImage, maskedImage, thresh, thresh * 2, 3);
HoughCircles(maskedImage, circles, CV_HOUGH_GRADIENT, 1, src_gray.rows / 4, cannyThreshold, accumulatorThreshold, 0, 0);
Mat display = src_display.clone();
for (size_t i = 0; i < circles.size(); i++)
{
    Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
    int radius = cvRound(circles[i][2]);
    // circle center
    circle(display, center, 3, Scalar(0, 255, 0), -1, 8, 0);
    // circle outline
    circle(display, center, radius, Scalar(0, 0, 255), 3, 8, 0);
}
I have tried playing with cannyThreshold and accumulatorThreshold, without results. The real images are 5x bigger. Here is a link for example 1 after thresholding.
Thanks
You already know the smaller circles in the image (which you have drawn in black).
Prepare a mask image using these circles, so that the areas containing the smaller circles have non-zero pixels. We'll call it mask.
In the original image, fill these circle areas with a dark color (say, black). This will result in an image like your fig 2. We'll call it filled.
Threshold the filled image to obtain the dark areas. We'll call it binary. You can use Otsu thresholding for this. The result will look something like this:
Take the distance transform of this binary image, using an accurate distance estimation method. We'll call this dist. It'll look something like this; the colored one is just a heat map for clarity:
Use the mask to obtain the peak regions from dist. The max value of each such region should give you the radius of the larger circle. You can also do some processing on these regions to arrive at a more reasonable value for the radius rather than just picking the max.
For selecting the regions, you can either find the contours of the mask and then extract that region from the dist image, or, since you already know the smaller circles from applying the Hough circle transform, prepare a mask from each of those circles and extract that region from the dist image. I'm not sure if you can calculate the max or other stats by giving a mask; the max will definitely work, because the rest of the pixels are 0, and you might be able to calculate other stats of the region if you extract those pixels into another array.
The figures below show such a mask and the extracted region from dist. For this one I get a max of around 29, which is consistent with the radius of that circle. Note that the images are not to scale.
mask for a circle, extracted region from dist
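As a side note on the uncertainty above: minMaxLoc does accept an optional mask argument, so the per-circle maximum can be read from dist directly, without copying the region into another array first. A minimal sketch, assuming dist is the distance-transform image and circleMask is a CV_8U mask that is non-zero only inside one of the smaller circles:
double minVal, maxVal;
Point minLoc, maxLoc;
// the last argument restricts the search to the non-zero pixels of circleMask
minMaxLoc(dist, &minVal, &maxVal, &minLoc, &maxLoc, circleMask);
// maxVal approximates the larger circle's radius; maxLoc is its center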
Here's the code (I'm not using the Hough circle transform):
Mat im = imread(INPUT_FOLDER_PATH + string("ex1.jpg"));
Mat gray;
cvtColor(im, gray, CV_BGR2GRAY);
Mat bw;
threshold(gray, bw, 0, 255, CV_THRESH_BINARY|CV_THRESH_OTSU);
// filtering smaller circles: not using the hough-circles transform here.
// you can replace this part with your hough-circles code.
vector<int> circles;
vector<vector<Point>> contours;
vector<Vec4i> hierarchy;
findContours(bw, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
for (int idx = 0; idx >= 0; idx = hierarchy[idx][0])
{
    Rect rect = boundingRect(contours[idx]);
    if (fabs(1.0 - (double)rect.width/rect.height) < .1) // nearly square bounding box
    {
        Mat mask = Mat::zeros(im.rows, im.cols, CV_8U);
        drawContours(mask, contours, idx, Scalar(255, 255, 255), -1);
        double area = sum(mask).val[0]/255;
        double rad = (rect.width + rect.height)/4.0;
        double circArea = CV_PI*rad*rad;
        double dif = fabs(1.0 - area/circArea);
        if (dif < .5 && rad < 50 && rad > 30) // restrict the radius
        {
            circles.push_back(idx); // store smaller circle contours
            drawContours(gray, contours, idx, Scalar(0, 0, 0), -1); // fill circles
        }
    }
}
threshold(gray, bw, 0, 255, CV_THRESH_BINARY_INV|CV_THRESH_OTSU);
Mat dist, distColor, color;
distanceTransform(bw, dist, CV_DIST_L2, 5);
double max;
Point maxLoc;
minMaxLoc(dist, NULL, &max);
dist.convertTo(distColor, CV_8U, 255.0/max);
applyColorMap(distColor, color, COLORMAP_JET);
imshow("", color);
waitKey();
// extract the dist region corresponding to each smaller circle and find its max
for (int idx = 0; idx < (int)circles.size(); idx++)
{
    Mat masked;
    Mat mask = Mat::zeros(im.rows, im.cols, CV_8U);
    drawContours(mask, contours, circles[idx], Scalar(255, 255, 255), -1);
    dist.copyTo(masked, mask);
    minMaxLoc(masked, NULL, &max, NULL, &maxLoc);
    circle(im, maxLoc, 4, Scalar(0, 255, 0), -1);
    circle(im, maxLoc, (int)max, Scalar(0, 0, 255), 2);
    cout << "rad: " << max << endl;
}
imshow("", im);
waitKey();
Results (scaled):
Hope this helps.

Where in my code have I broken the Mat equivalence rule?

I'm trying to achieve background subtraction in OpenCV 2.2 using the cv namespace (Qt 4.7). I have the following code, which compiles fine, but at runtime the program breaks because one Mat doesn't match the other. I can't find where this happens, and I'm currently going through the API reference trying to find it.
cvtColor( mcolImage, mcolImage, CV_BGR2RGB);
cvtColor( mcolImage, gscaleImage, CV_RGB2GRAY);
acc = Mat(Size(440,320), CV_32FC3);
accSQ = Mat(Size(440,320), CV_32FC3);
// we accumulate into a Mat to get a running average of the frames
Mat avg;
accumulateWeighted(gscaleImage, acc, 3.0, Mat());
accumulateSquare(gscaleImage, accSQ, Mat());
multiply(acc, acc, avg, 1);
Mat sigma, sigmaSQRT;
subtract(accSQ, avg, sigmaSQRT, Mat());
sqrt(sigmaSQRT, sigma); //Holds the standard deviation
Mat fgImage; //hold the foreground image
cv::absdiff(avg,gscaleImage, fgImage);
//GaussianBlur(gscaleImage, gscaleImage, Size(7,7), 2, 2 );
Mat buff ;
//convert to black and white
threshold(fgImage, buff, 75, THRESH_BINARY, 100);
dilate(buff, buff, Mat(3, 3, CV_8UC1), Point(-1, -1), 1, BORDER_CONSTANT, Scalar(1.0, 1.0, 1.0, 0));
erode(buff, buff, Mat(3, 3, CV_8UC1), Point(-1, -1), 1, BORDER_CONSTANT, Scalar(1.0, 1.0, 1.0, 0));
//rectangle(gscaleImage, cvPoint(100, 300), cvPoint(200, 100), cvScalar(255, 255, 255, 0), 1);
QImage colImagetmp((uchar*)mcolImage.data, mcolImage.cols, mcolImage.rows, mcolImage.step,
QImage::Format_RGB888 ); //Colour
QImage gscaleImagetmp ((uchar*)gscaleImage.data, gscaleImage.cols, gscaleImage.rows, gscaleImage.step,
QImage::Format_Indexed8); //Greyscale. I hope
QImage bwImagetmp((uchar*)buff.data, buff.cols, buff.rows, buff.step,
QImage::Format_Indexed8);
//Setup a colour table for the greyscale image
QVector<QRgb> colorTable;
for (int i = 0; i < 256; i++) colorTable.push_back(qRgb(i, i, i));
bwImagetmp.setColorTable(colorTable);
gscaleImagetmp.setColorTable(colorTable);
ui.intDisplay->setPixmap(QPixmap::fromImage(bwImagetmp));
ui.bwDisplay->setPixmap(QPixmap::fromImage(gscaleImagetmp));
ui.colDisplay->setPixmap( QPixmap::fromImage(colImagetmp ));
Thanks for the help in advance.
Edit:
After going through the code, I found that absdiff(avg, gscaleImage, fgImage); is where the program is crashing. I think it may be crashing on the second parameter, but I'm not sure.
I solved it (I think) by declaring a new temporary Mat and converting it specifically (using avg.convertTo()) to match the gscaleImage type and size.
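For illustration, a minimal sketch of that fix (the temporary's name avg8u is my invention; absdiff requires both operands to have the same size, depth, and channel count, so the accumulators also need to be single-channel, e.g. CV_32FC1, to match the grayscale input):
Mat avg8u;
avg.convertTo(avg8u, CV_8U);          // match the 8-bit depth of gscaleImage
Mat fg;
absdiff(avg8u, gscaleImage, fg);      // same size and type on both sides now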
