I have an image with two regions: a focused (near) region and a defocused (far) region. Using OpenCV, I want to detect the near region.
I have applied watershed in OpenCV and the Canny detector to detect the object, but the detected object includes both the near and the far region.
So I am looking for an idea or help from anyone on applying OpenCV to detect only the near region of the image.
Here is the code that produces the image I attached.
private Mat CalculateMapStrength(Mat inputMat) {
    Imgproc.cvtColor(inputMat, inputMat, Imgproc.COLOR_RGBA2GRAY);
    // Compute dx and dy derivatives
    Mat dx = new Mat();
    Mat dy = new Mat();
    Imgproc.Sobel(inputMat, dx, CvType.CV_32F, 1, 0);
    Imgproc.Sobel(inputMat, dy, CvType.CV_32F, 0, 1);
    Core.convertScaleAbs(dx, dx);
    Core.convertScaleAbs(dy, dy);
    // Approximate the gradient strength as the average of |dx| and |dy|
    Mat outputMat = new Mat();
    Core.addWeighted(dx, 0.5, dy, 0.5, 0, outputMat);
    return outputMat;
}
Besides that, I have image segmentation with the watershed algorithm in OpenCV. Can I combine the two results to detect the object? How would I combine them?
public Mat steptowatershed(Mat img)
{
    // Threshold a grayscale copy of the input
    Mat threeChannel = new Mat();
    Imgproc.cvtColor(img, threeChannel, Imgproc.COLOR_BGR2GRAY);
    Imgproc.threshold(threeChannel, threeChannel, 100, 255, Imgproc.THRESH_BINARY);
    // Sure foreground: eroded threshold image
    Mat fg = new Mat(img.size(), CvType.CV_8U);
    Imgproc.erode(threeChannel, fg, new Mat());
    // Sure background: dilated threshold image, inverted and set to 128
    Mat bg = new Mat(img.size(), CvType.CV_8U);
    Imgproc.dilate(threeChannel, bg, new Mat());
    Imgproc.threshold(bg, bg, 1, 128, Imgproc.THRESH_BINARY_INV);
    // Markers: foreground (255), background (128), unknown (0)
    Mat markers = new Mat(img.size(), CvType.CV_8U, new Scalar(0));
    Core.add(fg, bg, markers);
    WatershedSegmenter segmenter = new WatershedSegmenter();
    segmenter.setMarkers(markers);
    Imgproc.cvtColor(img, img, Imgproc.COLOR_RGBA2RGB);
    return segmenter.process(img);
}
public class WatershedSegmenter
{
    public Mat markers = new Mat();

    public void setMarkers(Mat markerImage)
    {
        markerImage.convertTo(markers, CvType.CV_32SC1);
    }

    public Mat process(Mat image)
    {
        Imgproc.watershed(image, markers);
        markers.convertTo(markers, CvType.CV_8U);
        return markers;
    }
}
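One possible way to combine the two results above -- a sketch only, under the assumption that the near (in-focus) region is the one with high gradient strength -- is to measure the mean of the strength map inside each watershed segment and keep only the segments whose mean exceeds a threshold. The helper name and the threshold parameter below are hypothetical:

// Hypothetical combination of CalculateMapStrength and steptowatershed:
// keep watershed segments whose mean gradient strength is high (in focus).
Mat keepSharpSegments(Mat strengthMap, Mat segments, double minMeanStrength) {
    Mat nearMask = Mat.zeros(strengthMap.size(), CvType.CV_8U);
    Core.MinMaxLocResult mm = Core.minMaxLoc(segments);
    for (int label = 1; label <= (int) mm.maxVal; label++) {
        // Mask of the pixels belonging to this watershed segment
        Mat segMask = new Mat();
        Core.compare(segments, new Scalar(label), segMask, Core.CMP_EQ);
        if (Core.countNonZero(segMask) == 0)
            continue;
        // Mean sharpness inside the segment
        double meanStrength = Core.mean(strengthMap, segMask).val[0];
        if (meanStrength >= minMeanStrength)
            Core.bitwise_or(nearMask, segMask, nearMask);
        segMask.release();
    }
    return nearMask;
}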
I am working on a project where I need to detect the contour of different types of documents.
Currently I am able to segment and detect contours using findContours, and everything works fine in most cases.
But if the document is white and the background is a very similar white, I cannot detect the contour.
For example, in this image
or this one: http://i.stack.imgur.com/9exrg.jpg
I cannot detect the white paper.
Here is the code I am using to segment the image in order to detect clean edges (edges with no holes, perfectly straight) for contour detection.
public static Mat process(Mat original) {
    Mat src = original.clone();
    Mat hsvMat = new Mat();
    Mat gray = new Mat();
    Mat sobx = new Mat();
    Mat soby = new Mat();
    Mat grad_abs_val_approx = new Mat();
    // Blur the value (brightness) channel of the HSV image
    Imgproc.cvtColor(src, hsvMat, Imgproc.COLOR_BGR2HSV);
    List<Mat> hsv_channels = new ArrayList<Mat>(3);
    Core.split(hsvMat, hsv_channels);
    Mat val = hsv_channels.get(2);
    Imgproc.GaussianBlur(val, gray, new Size(9, 9), 2, 2);
    Mat imf = new Mat();
    gray.convertTo(imf, CvType.CV_32FC1, 0.5f, 0.5f);
    // Approximate the gradient magnitude: sqrt(sobx^2 + soby^2)
    Imgproc.Sobel(imf, sobx, -1, 1, 0);
    Imgproc.Sobel(imf, soby, -1, 0, 1);
    sobx = sobx.mul(sobx);
    soby = soby.mul(soby);
    Mat sumxy = new Mat();
    Core.add(sobx, soby, sumxy);
    Core.pow(sumxy, 0.5, grad_abs_val_approx);
    sobx.release();
    soby.release();
    sumxy.release();
    // Smooth, then keep only values above mean + stddev
    Mat filtered = new Mat();
    Imgproc.GaussianBlur(grad_abs_val_approx, filtered, new Size(9, 9), 2, 2);
    final MatOfDouble mean = new MatOfDouble();
    final MatOfDouble stdev = new MatOfDouble();
    Core.meanStdDev(filtered, mean, stdev);
    Mat thresholded = new Mat();
    Imgproc.threshold(filtered, thresholded, mean.toArray()[0] + stdev.toArray()[0], 1.0, Imgproc.THRESH_TOZERO);
    Mat converted = new Mat();
    thresholded.convertTo(converted, CvType.CV_8UC1);
    return converted;
}
Using the code above leads to the following result:
As you can see, the edges are not perfectly straight (and there are holes). The edges are barely visible, and findContours fails to detect the contours.
I have tried all the solutions/suggestions described here.
So here are my questions:
1) What is wrong with my code?
2) How can I preprocess the image in order to detect clean edges (edges with no holes, perfectly straight) for contour detection?
Many thanks in advance for your assistance.
When the edges are not easily detected, you can try other complementary approaches.
For example, on this image, this simple MATLAB code (which you can implement using OpenCV):
I=imread('page.png');
r=I(:,:,1);
g=I(:,:,2);
b=I(:,:,3);
imshow(b>g);
produces the following result, which you can then feed into your edge detection code:
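For reference, a minimal OpenCV (Java) sketch of the same blue-greater-than-green trick might look like this; depending on the OpenCV version, imread lives in Highgui (2.x) or Imgcodecs (3.x+):

// Rough Java equivalent of the MATLAB snippet above: mask of pixels
// where the blue channel exceeds the green channel.
Mat img = Highgui.imread("page.png");   // BGR channel order in OpenCV
List<Mat> bgr = new ArrayList<Mat>();
Core.split(img, bgr);                   // bgr: 0 = blue, 1 = green, 2 = red
Mat mask = new Mat();
Core.compare(bgr.get(0), bgr.get(1), mask, Core.CMP_GT);  // 255 where b > g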
I am trying to detect the biggest/largest rectangular shape and draw a bounding box around the detected area.
In my use case, very often (but not always) the object that represents the rectangle is white and the background is a very similar white.
Before detecting contours, I preprocess the image in order to obtain clean edges.
My problem is that I cannot detect the edges reliably, and I have a lot of noise even after blurring and using adaptiveThreshold or threshold.
Here is the original image I used for contour detection.
I have tried different ways to detect clean edges under different lighting conditions, without success.
How can I process the image in order to detect clean edges (edges with no holes) for contour detection?
Below is the code I am using.
public static Mat findRectangleX(Mat original) {
    Mat src = original.clone();
    Mat gray = new Mat();
    Mat binary = new Mat();
    MatOfPoint2f approxCurve;
    List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
    if (original.type() != CvType.CV_8U) {
        Imgproc.cvtColor(original, gray, Imgproc.COLOR_BGR2GRAY);
    } else {
        original.copyTo(gray);
    }
    Imgproc.GaussianBlur(gray, gray, new Size(5, 5), 0);
    Imgproc.adaptiveThreshold(gray, binary, 255, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY_INV, 11, 1);
    //Imgproc.threshold(gray, binary, 0, 255, Imgproc.THRESH_BINARY_INV | Imgproc.THRESH_OTSU);
    double maxArea = 0;
    Imgproc.findContours(binary, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
    for (int i = 0; i < contours.size(); i++) {
        MatOfPoint contour = contours.get(i);
        MatOfPoint2f temp = new MatOfPoint2f(contour.toArray());
        double area = Imgproc.contourArea(contour);
        // Approximate the contour with a polygon; keep quadrilaterals only
        approxCurve = new MatOfPoint2f();
        Imgproc.approxPolyDP(temp, approxCurve, Imgproc.arcLength(temp, true) * 0.03, true);
        if (approxCurve.total() == 4) {
            Rect rect = Imgproc.boundingRect(contour);
            Imgproc.rectangle(src, rect.tl(), rect.br(), new Scalar(255, 0, 0, .8), 4);
            if (maxArea < area)
                maxArea = area;
        }
    }
    Log.v(TAG, "Total contours found : " + contours.size());
    Log.v(TAG, "Max area :" + maxArea);
    return src;
}
I've searched for similar problems on Stack Overflow and tried the code samples, but none of them worked for me. The difficulty, I think, is the white object on a white background.
How can I process the image in order to sharpen the edges for contour detection?
How can I detect the biggest/largest rectangular shape and draw a rectangle around the detected shape?
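Regarding the last question, one minimal sketch (untested on these images, and reusing the loop from findRectangleX above) is to remember the largest quadrilateral while iterating and draw only that one at the end:

// Track the largest 4-vertex contour and draw only its bounding box.
Rect bestRect = null;
double bestArea = 0;
for (MatOfPoint contour : contours) {
    MatOfPoint2f c2f = new MatOfPoint2f(contour.toArray());
    MatOfPoint2f approx = new MatOfPoint2f();
    Imgproc.approxPolyDP(c2f, approx, Imgproc.arcLength(c2f, true) * 0.03, true);
    double area = Imgproc.contourArea(contour);
    if (approx.total() == 4 && area > bestArea) {
        bestArea = area;
        bestRect = Imgproc.boundingRect(contour);
    }
}
if (bestRect != null)
    Imgproc.rectangle(src, bestRect.tl(), bestRect.br(), new Scalar(0, 255, 0), 4);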
Update (20/02/2017):
I have tried the solution suggested by @Nejc in the post below. The segmentation is better, but I still have holes in the contour, and findContours fails to detect the larger contour.
Below is the code provided by @Nejc, translated to Java.
public static Mat process(Mat original) {
    Mat src = original.clone();
    Mat hsvMat = new Mat();
    Mat saturation = new Mat();
    Mat sobx = new Mat();
    Mat soby = new Mat();
    Mat grad_abs_val_approx = new Mat();
    Imgproc.cvtColor(src, hsvMat, Imgproc.COLOR_BGR2HSV);
    List<Mat> hsv_channels = new ArrayList<Mat>(3);
    Core.split(hsvMat, hsv_channels);
    Mat sat = hsv_channels.get(1);
    Imgproc.GaussianBlur(sat, saturation, new Size(9, 9), 2, 2);
    Mat imf = new Mat();
    saturation.convertTo(imf, CvType.CV_32FC1, 0.5f, 0.5f);
    Imgproc.Sobel(imf, sobx, -1, 1, 0);
    Imgproc.Sobel(imf, soby, -1, 0, 1);
    sobx = sobx.mul(sobx);
    soby = soby.mul(soby);
    // NOTE: this diverges from the original C++: convertScaleAbs saturates the
    // squared gradients to 8 bits and the square root is never taken, whereas
    // the original computes pow(sobx + soby, 0.5).
    Mat abs_x = new Mat();
    Core.convertScaleAbs(sobx, abs_x);
    Mat abs_y = new Mat();
    Core.convertScaleAbs(soby, abs_y);
    Core.addWeighted(abs_x, 1, abs_y, 1, 0, grad_abs_val_approx);
    sobx.release();
    soby.release();
    Mat filtered = new Mat();
    Imgproc.GaussianBlur(grad_abs_val_approx, filtered, new Size(9, 9), 2, 2);
    final MatOfDouble mean = new MatOfDouble();
    final MatOfDouble stdev = new MatOfDouble();
    Core.meanStdDev(filtered, mean, stdev);
    Mat thresholded = new Mat();
    Imgproc.threshold(filtered, thresholded, mean.toArray()[0] + stdev.toArray()[0], 1.0, Imgproc.THRESH_TOZERO);
    /*
    Mat thresholded_bin = new Mat();
    Imgproc.threshold(filtered, thresholded_bin, mean.toArray()[0] + stdev.toArray()[0], 1.0, Imgproc.THRESH_BINARY);
    Mat converted = new Mat();
    thresholded_bin.convertTo(converted, CvType.CV_8UC1);
    */
    return thresholded;
}
Here is the image I got after running the code above:
Image after using @Nejc's solution.
1) Why does my translated code not produce the same image as @Nejc's? The same code applied to the same image should produce the same output, shouldn't it?
2) Did I miss something when translating?
3) For my understanding, why do we multiply the image by itself in the instruction sobx = sobx.mul(sobx)?
I managed to obtain a pretty nice image of the edges by computing an approximation of the absolute value of the gradient of the input image.
EDIT: Before I started working, I resized the input image to a 5x smaller size. Click here to see it! If you use my code on that image, the results will be good. If you want to make my code work well with the image at its original size, then either:
multiply the Gaussian kernel sizes and sigmas by 5, or
downsample the image by a factor of 5, execute the algorithm, and then upsample the result by a factor of 5 (this should be much faster than the first option; see the sketch below).
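A minimal Java sketch of the downsample/upsample option, matching the Java translation used elsewhere in this thread (the factor of 5 and the interpolation flags are assumptions):

// Run the edge-strength pipeline at 1/5 scale, then upsample the result.
Mat small = new Mat();
Imgproc.resize(original, small, new Size(), 0.2, 0.2, Imgproc.INTER_AREA);
Mat edgesSmall = process(small);
Mat edges = new Mat();
Imgproc.resize(edgesSmall, edges, original.size(), 0, 0, Imgproc.INTER_LINEAR);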
This is the result I got:
My procedure relies on two key features. The first is a conversion to an appropriate color space. As Jeru Luke stated in his answer, the saturation channel of the HSV color space is a good choice here. The second important thing is the estimation of the absolute value of the gradient. I used Sobel operators and some arithmetic for that purpose. I can provide additional explanations if someone requests them.
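For completeness, the arithmetic in question is the standard gradient-magnitude approximation: with Gx and Gy the horizontal and vertical Sobel responses, the code computes

|grad I| = sqrt(Gx^2 + Gy^2)

which is why each Sobel image is first multiplied by itself (squared elementwise) before the sum and the square root. This also answers question 3 above.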
This is the code I used to obtain the first image.
#include <opencv2/opencv.hpp>

using namespace std;
using namespace cv;
Mat img_rgb = imread("letter.jpg");
Mat img_hsv;
cvtColor(img_rgb, img_hsv, CV_BGR2HSV);
vector<Mat> channels_hsv;
split(img_hsv, channels_hsv);
Mat channel_s = channels_hsv[1];
GaussianBlur(channel_s, channel_s, Size(9, 9), 2, 2);
Mat imf;
channel_s.convertTo(imf, CV_32FC1, 0.5f, 0.5f);
Mat sobx, soby;
Sobel(imf, sobx, -1, 1, 0);
Sobel(imf, soby, -1, 0, 1);
sobx = sobx.mul(sobx);
soby = soby.mul(soby);
Mat grad_abs_val_approx;
cv::pow(sobx + soby, 0.5, grad_abs_val_approx);
Mat filtered;
GaussianBlur(grad_abs_val_approx, filtered, Size(9, 9), 2, 2);
Scalar mean, stdev;
meanStdDev(filtered, mean, stdev);
Mat thresholded;
cv::threshold(filtered, thresholded, mean.val[0] + stdev.val[0], 1.0, CV_THRESH_TOZERO);
// I scale the image at this point so that it is displayed properly
imshow("image", thresholded/50);
And this is how I computed the second image:
Mat thresholded_bin;
cv::threshold(filtered, thresholded_bin, mean.val[0] + stdev.val[0], 1.0, CV_THRESH_BINARY);
Mat converted;
thresholded_bin.convertTo(converted, CV_8UC1);
vector<vector<Point>> contours;
findContours(converted, contours, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);
Mat contour_img = Mat::zeros(converted.size(), CV_8UC1);
drawContours(contour_img, contours, -1, 255);
imshow("contours", contour_img);
Thanks for your comments and suggestions.
The code provided by @Nejc works perfectly and covers 80% of my use cases.
Nevertheless, it does not work in similar cases like this one:
case not solved by the current code
and I don't know why.
Perhaps someone has an idea/clue/solution?
I will keep improving the code and trying to find a more generic solution that covers more cases. I will post it if I ever find one.
In any case, below is the working code based on @Nejc's solution and notes.
public static Mat process(Mat original) {
    Mat src = original.clone();
    Mat hsvMat = new Mat();
    Mat saturation = new Mat();
    Mat sobx = new Mat();
    Mat soby = new Mat();
    Mat grad_abs_val_approx = new Mat();
    // Blur the saturation channel of the HSV image
    Imgproc.cvtColor(src, hsvMat, Imgproc.COLOR_BGR2HSV);
    List<Mat> hsv_channels = new ArrayList<Mat>(3);
    Core.split(hsvMat, hsv_channels);
    Mat sat = hsv_channels.get(1);
    Imgproc.GaussianBlur(sat, saturation, new Size(9, 9), 2, 2);
    Mat imf = new Mat();
    saturation.convertTo(imf, CvType.CV_32FC1, 0.5f, 0.5f);
    // Gradient magnitude: sqrt(sobx^2 + soby^2)
    Imgproc.Sobel(imf, sobx, -1, 1, 0);
    Imgproc.Sobel(imf, soby, -1, 0, 1);
    sobx = sobx.mul(sobx);
    soby = soby.mul(soby);
    Mat sumxy = new Mat();
    Core.add(sobx, soby, sumxy);
    Core.pow(sumxy, 0.5, grad_abs_val_approx);
    sobx.release();
    soby.release();
    sumxy.release();
    // Smooth, then keep only values above mean + stddev
    Mat filtered = new Mat();
    Imgproc.GaussianBlur(grad_abs_val_approx, filtered, new Size(9, 9), 2, 2);
    final MatOfDouble mean = new MatOfDouble();
    final MatOfDouble stdev = new MatOfDouble();
    Core.meanStdDev(filtered, mean, stdev);
    Mat thresholded = new Mat();
    Imgproc.threshold(filtered, thresholded, mean.toArray()[0] + stdev.toArray()[0], 1.0, Imgproc.THRESH_TOZERO);
    Mat converted = new Mat();
    thresholded.convertTo(converted, CvType.CV_8UC1);
    return converted;
}
I want to rotate the following image by 20 degrees about its center.
I can achieve this in OpenCV in two different ways:
1. Perspective transformation
2. Affine transformation
public void perspectiveXformation(String imgPath, List<Point> sourceCorners,
        List<Point> targetCorners) {
    // Load image in gray-scale format
    Mat matIncomingImg = Highgui.imread(imgPath, 0);
    // Check the size of the list; process only if there are four points.
    if (sourceCorners.size() == 4) {
        // Convert the point lists into Mat objects.
        Mat sourceCornersMat = Converters.vector_Point2f_to_Mat(sourceCorners);
        Mat targetCornersMat = Converters.vector_Point2f_to_Mat(targetCorners);
        Mat matResultant = new Mat();
        // Do the perspective transformation
        Mat matPtransform = Imgproc.getPerspectiveTransform(sourceCornersMat, targetCornersMat);
        Imgproc.warpPerspective(matIncomingImg, matResultant, matPtransform,
                new Size(targetCorners.get(2).x, targetCorners.get(2).y));
        Highgui.imwrite("/tmp/perspectiveXform.png", matResultant);
    }
}
public void afflineXformation(String imgPath, Point center) {
    Mat selectedMat = Highgui.imread(imgPath, 0);
    // Rotation matrix for 20 degrees about the given center, scale 1.0
    Mat res = Imgproc.getRotationMatrix2D(center, 20, 1.0);
    Mat newMat = new Mat();
    Imgproc.warpAffine(selectedMat, newMat, res, selectedMat.size());
    Highgui.imwrite("/tmp/afflineXform.png", newMat);
}
Which is the preferred way of rotating an image?
I am using the following code to stitch two input images. For an unknown reason, the output result is garbage.
It seems that the homography matrix is wrong (or is applied wrongly), because the transformed image looks like an "exploded star".
I have commented the part that I guess is the source of the problem, but I cannot figure it out.
Any help or pointer is appreciated!
Have a nice day,
Ali
void Stitch2Image(IplImage *mImage1, IplImage *mImage2)
{
    // Convert input images to gray
    IplImage* gray1 = cvCreateImage(cvSize(mImage1->width, mImage1->height), 8, 1);
    cvCvtColor(mImage1, gray1, CV_BGR2GRAY);
    IplImage* gray2 = cvCreateImage(cvSize(mImage2->width, mImage2->height), 8, 1);
    cvCvtColor(mImage2, gray2, CV_BGR2GRAY);
    // Convert gray images to Mat
    Mat img1(gray1);
    Mat img2(gray2);
    // Detect FAST keypoints and BRIEF features in the first image
    FastFeatureDetector detector(50);
    BriefDescriptorExtractor descriptorExtractor;
    BruteForceMatcher<L1<uchar> > descriptorMatcher;
    vector<KeyPoint> keypoints1;
    detector.detect(img1, keypoints1);
    Mat descriptors1;
    descriptorExtractor.compute(img1, keypoints1, descriptors1);
    // Detect FAST keypoints and BRIEF features in the second image
    vector<KeyPoint> keypoints2;
    detector.detect(img2, keypoints2); // was detect(img1, ...): keypoints must come from the second image
    Mat descriptors2;
    descriptorExtractor.compute(img2, keypoints2, descriptors2);
    vector<DMatch> matches;
    descriptorMatcher.match(descriptors1, descriptors2, matches);
    if (matches.size() == 0)
        return;
    // Collect matched point pairs
    vector<Point2f> points1, points2;
    for (size_t q = 0; q < matches.size(); q++)
    {
        points1.push_back(keypoints1[matches[q].queryIdx].pt);
        points2.push_back(keypoints2[matches[q].trainIdx].pt);
    }
    // Create the result image ('result' is assumed to be declared elsewhere)
    result = cvCreateImage(cvSize(mImage2->width * 2, mImage2->height), 8, 3);
    cvZero(result);
    // Copy the second image into the result image
    cvSetImageROI(result, cvRect(mImage2->width, 0, mImage2->width, mImage2->height));
    cvCopy(mImage2, result);
    cvResetImageROI(result);
    // Create warp image
    IplImage* warpImage = cvCloneImage(result);
    cvZero(warpImage);
    /************************** Is there anything wrong here!? *******************/
    // Find homography matrix (method 8 == CV_RANSAC)
    Mat H = findHomography(Mat(points1), Mat(points2), 8, 3.0);
    CvMat HH = H; // Is this line converted correctly?
    // Transform warp image
    cvWarpPerspective(mImage1, warpImage, &HH);
    // Blend
    blend(result, warpImage);
    /*******************************************************************************/
    cvReleaseImage(&gray1);
    cvReleaseImage(&gray2);
    cvReleaseImage(&warpImage);
}
This is what I would suggest you try, in this order:
1) Use the CV_RANSAC option for the homography. Refer to http://opencv.willowgarage.com/documentation/cpp/calib3d_camera_calibration_and_3d_reconstruction.html
2) Try other descriptors, particularly SIFT or SURF, which ship with OpenCV. For some images, FAST or BRIEF descriptors are not discriminating enough. EDIT (Aug '12): The ORB descriptors, which are based on BRIEF, are quite good and fast!
3) Look at the homography matrix (step through in debug mode or print it) and see whether it is consistent.
4) If the above does not give you a clue, look at the matches that are formed. Is one point in one image being matched to several points in the other image? If so, the problem again is likely the descriptors or the detector.
My hunch is that it is the descriptors (so 1) or 2) should fix it).
Also switch to Hamming distance instead of L1 distance in BruteForceMatcher, since BRIEF descriptors are meant to be compared with Hamming distance.
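For illustration, here is a minimal sketch of both changes in the OpenCV Java bindings used elsewhere in this thread (in the C++ API of that era, the equivalents are BruteForceMatcher<Hamming> and CV_RANSAC); the descriptor Mats and point lists are assumed to mirror the question's variables:

// Hamming-distance brute-force matching for binary descriptors (BRIEF/ORB),
// followed by a RANSAC-based homography estimate.
DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
MatOfDMatch matches = new MatOfDMatch();
matcher.match(descriptors1, descriptors2, matches);

MatOfPoint2f pts1 = new MatOfPoint2f(); // fill from matched keypoints, as in the question
MatOfPoint2f pts2 = new MatOfPoint2f();
Mat H = Calib3d.findHomography(pts1, pts2, Calib3d.RANSAC, 3.0);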
Your homography might be calculated from wrong matches and thus represent a bad alignment.
I suggest passing the matrix through an additional check for linear dependency between its rows.
You can use the following code:
bool cvExtCheckTransformValid(const Mat& T) {
    // Check the shape of the matrix
    if (T.empty())
        return false;
    if (T.rows != 3)
        return false;
    if (T.cols != 3)
        return false;
    // Check for linear dependency: divide row 0 elementwise by row 1.
    // If the rows are (nearly) linearly dependent, the ratio is (nearly)
    // constant and its relative standard deviation is small.
    Mat tmp;
    T.row(0).copyTo(tmp);
    tmp /= T.row(1);
    Scalar mean;
    Scalar stddev;
    meanStdDev(tmp, mean, stddev);
    double X = abs(stddev[0] / mean[0]);
    printf("std of H: %g\n", X);
    if (X < 0.8) // heuristic threshold: rows too close to dependent
        return false;
    return true;
}