OpenCV contour evolution

I have a contour that I would like to "snap" to edges in an image. That is, something like Intelligent Scissors, but for the whole contour at the same time. A user has provided a rough sketch of the outline of an object, and I'd like to clean it up by "pushing" each point on the contour to the nearest point in an edge image.
Does something like this exist in OpenCV?

You can mimic active contours using cv::grabCut as suggested. You choose the radius of attraction (how far from the original position the curve can evolve), and by using dilated and eroded images, you define the unknown region around the contour.
// cv::Mat img, mask;  // the contour is drawn on mask as a filled polygon
if ( mask.size() != img.size() )
    CV_Error(CV_StsError, "ERROR");

int R = 32; // radius of attraction
cv::Mat strel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(2*R+1, 2*R+1));

// Build the grabCut label image: background everywhere by default
cv::Mat gc(mask.size(), CV_8UC1, cv::Scalar(cv::GC_BGD));
cv::Mat t;
cv::dilate(mask, t, strel);
gc.setTo(cv::GC_PR_BGD, t);    // probably background: within R outside the contour
gc.setTo(cv::GC_PR_FGD, mask); // probably foreground: inside the contour (3)
cv::erode(mask, t, strel);
gc.setTo(cv::GC_FGD, t);       // definitely foreground: deeper than R inside the contour (1)

cv::Mat bgdModel, fgdModel;
cv::grabCut(img, gc, cv::Rect(), bgdModel, fgdModel, 2, cv::GC_INIT_WITH_MASK);
gc &= 0x1;  // keep foreground and probable foreground (both have the low bit set)
gc *= 255;  // so that you see it
What you may lose is the topology of the contour; some post-processing is required there (see the sketch below). Also, you cannot control the curvature or smoothness of the contour, and it's not really contour evolution in the strict sense.
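One way to do that post-processing (my own sketch, not part of the original answer; it assumes gc is the 0/255 result produced above) is to keep only the largest contour of the grabCut output and redraw it as a clean mask:
// Keep only the largest contour of the grabCut result and redraw it as a clean mask
std::vector<std::vector<cv::Point> > contours;
cv::findContours(gc.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
int largest = -1;
double maxArea = 0;
for (size_t i = 0; i < contours.size(); i++) {
    double a = cv::contourArea(contours[i]);
    if (a > maxArea) { maxArea = a; largest = (int)i; }
}
cv::Mat snapped = cv::Mat::zeros(gc.size(), CV_8UC1);
if (largest >= 0)
    cv::drawContours(snapped, contours, largest, cv::Scalar(255), -1);
// contours[largest] is the snapped contour, snapped is its filled mask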
If you are interested, the ITK geodesic active contour filter might also be what you are looking for: http://www.itk.org/Doxygen/html/classitk_1_1GeodesicActiveContourLevelSetImageFilter.html

Related

comparing blob detection and Structural Analysis and Shape Descriptors in opencv

I need to use blob detection and Structural Analysis and Shape Descriptors (more specifically findContours, drawContours and moments) to detect colored circles in an image. I need to know the pros and cons of each method and which method is better. Can anyone show me the differences between those 2 methods please?
As @scap3y suggested in the comments, I'd go for a much simpler approach. What I'm always doing in these cases is something similar to this:
// Convert your image to HSV color space
Mat hsv;
hsv.create(originalImage.size(), CV_8UC3);
cvtColor(originalImage, hsv, CV_RGB2HSV);
// Choose the range in each of hue, saturation and value and threshold the other pixels
Mat thresholded;
uchar loH = 130, hiH = 170;
uchar loS = 40,  hiS = 255;
uchar loV = 40,  hiV = 255;
inRange(hsv, Scalar(loH, loS, loV), Scalar(hiH, hiS, hiV), thresholded);
// Find contours in the image (an additional step could be to
// apply morphologyEx() first)
vector<vector<Point> > contours;
findContours(thresholded, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
// Draw your contours as ellipses into the original image
// (valuable_rectangle_indices holds the indices of the contours you decided to keep)
RotatedRect rect;
for (size_t i = 0; i < valuable_rectangle_indices.size(); i++) {
    rect = minAreaRect(contours[valuable_rectangle_indices[i]]);
    ellipse(originalImage, rect, Scalar(0, 0, 255)); // draw ellipse
}
The only thing left for you to do now is to figure out which range your markers fall into in HSV color space.
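One quick way to find that range (a small sketch of my own, not part of the answer; pickHsv is just a helper name, and hsv and originalImage are the Mats from the snippet above) is to print the HSV value of any pixel you click on:
// Mouse callback: print the HSV triplet of the clicked pixel
static void pickHsv(int event, int x, int y, int, void* userdata)
{
    if (event != CV_EVENT_LBUTTONDOWN) return;
    Mat* hsvImg = (Mat*)userdata;
    Vec3b p = hsvImg->at<Vec3b>(y, x);
    std::cout << "H=" << (int)p[0] << " S=" << (int)p[1] << " V=" << (int)p[2] << std::endl;
}
// ...
namedWindow("picker");
setMouseCallback("picker", pickHsv, &hsv);
imshow("picker", originalImage);
waitKey();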

background detection having font with same color as background

I have an image like the following (a white figure on a red background; the figure has two thin red lines inside it)
and I want to obtain the following image (the red background removed, but not the two red lines inside the figure).
I tried convexHull from OpenCV, but obviously that approach only works for convex figures. My feeling is that convolution may help here, but I have no concrete idea yet.
Dilate and Erode should work for your example:
Mat image = imread("image1.jpg");
int dilation_size = 5;
int erosion_size = 6;
int threshold_value = 200;
Mat mask;
cvtColor( image, mask, CV_BGR2GRAY );
// BINARY THRESHOLDING: the white figure becomes the mask
threshold( mask, mask, threshold_value, 255, 0);
Mat dilation_element = getStructuringElement(MORPH_RECT, Size( 2*dilation_size + 1, 2*dilation_size + 1 ), Point( dilation_size, dilation_size ) );
Mat erosion_element  = getStructuringElement(MORPH_RECT, Size( 2*erosion_size + 1, 2*erosion_size + 1 ), Point( erosion_size, erosion_size ) );
dilate(mask, mask, dilation_element); // close the thin red lines inside the figure
erode(mask, mask, erosion_element);   // shrink the mask back to the figure outline
Mat target;
image.copyTo(target, mask);
imshow("hello", target);
waitKey();
Output:
A few suggestions :)
just floodFill()
convexHull does what the name says, but it has a companion, convexityDefects
He-he, it looks like convolution with a circle whose diameter is slightly bigger than the line thickness (8 pixels, for example) works!
So the algorithm will look as follows (a rough code sketch is given after this list):
1. convolve with a circle whose diameter is slightly bigger than the line thickness
2. normalize the convolution; you are interested in values greater than 0.95-0.97
3. for each point of the convolution result with a value greater than 0.95-0.97, zero out all neighbours within the range R = diameter/2
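A rough sketch of steps 1 and 2 (my own interpretation of the suggestion above; mask is assumed to be the 0/255 binary image of the figure, e.g. the thresholded mask from the earlier answer, and the diameter of 9 is just an example value):
// Convolve the binary figure with a normalized disk and keep locations where the disk
// fits (almost) entirely inside the white figure.
int diameter = 9; // slightly bigger than the line thickness (assumed value)
Mat disk = getStructuringElement(MORPH_ELLIPSE, Size(diameter, diameter));
Mat kernel;
disk.convertTo(kernel, CV_32F, 1.0 / countNonZero(disk)); // kernel now sums to 1
Mat maskF, response;
mask.convertTo(maskF, CV_32F, 1.0 / 255.0);               // binary mask scaled to 0/1
filter2D(maskF, response, CV_32F, kernel);
Mat centers = response > 0.95f;                           // white where the disk fits
// step 3 (zeroing neighbours within R = diameter/2) is left out of this sketch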

color a grayscale image with opencv

I'm using OpenNI for a project with a Kinect sensor. I'd like to color the user's pixels given by the depth map. Right now I have pixels that go from white to black, but I want them to go from red to black. I've tried alpha blending, but my result is only pixels from pink to black, because with addWeighted I add red + white = pink.
This is my current code:
layers = device.getDepth().clone();
cvtColor(layers, layers, CV_GRAY2BGR);
Mat red = Mat(240,320, CV_8UC3, Scalar(255,0,0));
Mat red_body; // = Mat::zeros(240,320, CV_8UC3);
red.copyTo(red_body, device.getUserMask());
addWeighted(red_body, 0.8, layers, 0.5, 0.0, layers);
where device.getDepth() returns a cv::Mat with the depth map and device.getUserMask() returns a cv::Mat with the user's pixels (white pixels only).
Any advice?
EDIT:
One more thing: thanks to sammy's answer I've done it, but actually my values don't run exactly from 0 to 255; they run from, for example, 123 to 220.
I'm going to find the minimum and maximum via a simple for loop (is there a better way?), and how can I map my values from min-max to 0-255?
First, OpenCV's default color format is BGR, not RGB. So your code for creating the red image should be:
Mat red = Mat(240,320, CV_8UC3, Scalar(0,0,255));
For a red-to-black color map, you can use element-wise multiplication instead of alpha blending:
Mat out = red_body.mul(layers, 1.0/255);
You can find the min and max values of a matrix M using
double minVal, maxVal;
minMaxLoc(M, &minVal, &maxVal, 0, 0);
You can then subtract the minimum value and scale with a factor:
double factor = 255.0/(maxVal - minVal);
M = factor*(M - minVal);
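For what it's worth, cv::normalize can do the same subtract-and-scale in a single call (a one-line alternative, not part of the original answer):
// Stretch the value range of M to 0-255 in place
normalize(M, M, 0, 255, NORM_MINMAX);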
Kinda clumsy and slow, but maybe split layers, copy red_body (make it a one-channel Mat, not 3) into the red channel, and merge them back into layers?
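That split/merge idea might look roughly like this (just a sketch, assuming red_body is the single-channel 0/255 Mat used in the reshape version below):
vector<Mat> channels;
split(layers, channels);        // channels[0] = B, channels[1] = G, channels[2] = R
red_body.copyTo(channels[2]);   // replace the red channel (BGR order) with red_body
merge(channels, layers);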
Get the same effect, but much faster (in place), with reshape:
layers = device.getDepth().clone();
cvtColor(layers, layers, CV_GRAY2BGR);
Mat red = Mat(240,320, CV_8UC1, Scalar(255)); // one channel
Mat red_body;
red.copyTo(red_body, device.getUserMask());
Mat flatLayer = layers.reshape(1,240*320); // presumed dimensions of layers: one row per pixel, one column per channel
red_body.reshape(0,240*320).copyTo(flatLayer.col(2)); // column 2 is the red channel in BGR order
// layers now has the red from red_body

Filling holes inside a binary object

I have a problem with filling white holes inside black coins so that I end up with a 0-255 binary image containing only filled black coins. I have used a median filter to accomplish it, but in that case the connection bridge between coins grows, and it becomes impossible to recognize them after several erosions... So I need a simple floodFill-like method in OpenCV.
Here is my image with holes:
EDIT: the floodFill-like function must fill holes in big components without prompting for X, Y coordinates as a seed...
EDIT: I tried to use the cvDrawContours function, but it doesn't fill contours located inside bigger ones.
Here is my code:
CvMemStorage mem = cvCreateMemStorage(0);
CvSeq contours = new CvSeq();
CvSeq ptr = new CvSeq();
int sizeofCvContour = Loader.sizeof(CvContour.class);
cvThreshold(gray, gray, 150, 255, CV_THRESH_BINARY_INV);
int numOfContours = cvFindContours(gray, mem, contours, sizeofCvContour, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
System.out.println("The num of contours: "+numOfContours); //prints 87, ok
Random rand = new Random();
for (ptr = contours; ptr != null; ptr = ptr.h_next()) {
    Color randomColor = new Color(rand.nextFloat(), rand.nextFloat(), rand.nextFloat());
    CvScalar color = CV_RGB( randomColor.getRed(), randomColor.getGreen(), randomColor.getBlue());
    cvDrawContours(gray, ptr, color, color, -1, CV_FILLED, 8);
}
CanvasFrame canvas6 = new CanvasFrame("drawContours");
canvas6.showImage(gray);
Result: (you can see black holes inside each coin)
There are two methods to do this:
1) Contour Filling:
First, invert the image, find the contours, fill them with white and invert back:
des = cv2.bitwise_not(gray)
contour,hier = cv2.findContours(des,cv2.RETR_CCOMP,cv2.CHAIN_APPROX_SIMPLE)
for cnt in contour:
    cv2.drawContours(des,[cnt],0,255,-1)
gray = cv2.bitwise_not(des)
Resulting image:
2) Image Opening:
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(3,3))
res = cv2.morphologyEx(gray,cv2.MORPH_OPEN,kernel)
The resulting image is as follows:
You can see there is not much difference between the two cases.
NB: gray is the grayscale input image; all the code above is OpenCV-Python.
Reference: OpenCV Morphological Transformations.
A simple dilate and erode would close the gaps fairly well, I imagine. I think maybe this is what you're looking for.
A more robust solution would be to do an edge detection on the whole image, and then a Hough transform for circles. A quick Google search shows there are code samples available in various languages for size-invariant detection of circles using a Hough transform, so hopefully that will give you something to go on.
The benefit of using the Hough transform is that the algorithm will actually give you an estimate of the size and location of every circle, so you can rebuild an ideal image based on that model. It should also be very robust to overlap, especially considering the quality of the input image here (i.e. less worry about false positives, so you can lower the threshold for results).
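For reference, a minimal HoughCircles sketch along those lines (input here is an assumed name for the original color image, and every parameter value is a rough placeholder you would need to tune):
// Detect circles with the Hough transform and draw them
Mat gray, blurred;
cvtColor(input, gray, CV_BGR2GRAY);
GaussianBlur(gray, blurred, Size(9, 9), 2, 2);  // reduce noise so fewer spurious circles are found
vector<Vec3f> circles;
HoughCircles(blurred, circles, CV_HOUGH_GRADIENT, 1, gray.rows / 8, 100, 30, 10, 100);
for (size_t i = 0; i < circles.size(); i++) {
    Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
    int radius = cvRound(circles[i][2]);
    circle(input, center, radius, Scalar(0, 255, 0), 2);  // outline each detected coin
}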
You might be looking for the Fillhole transformation, an application of morphological image reconstruction.
This transformation will fill the holes in your coins, though at the cost of also filling all holes between groups of adjacent coins. The Hough-space or opening-based solutions suggested by the other posters will probably give you better high-level recognition results.
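In OpenCV terms, the basic fill-hole operation can be approximated with a flood fill from the border (a minimal sketch, assuming binary is a 0/255 mask with white coins and that the pixel (0,0) belongs to the background):
// Flood-fill the background from a corner; afterwards only the holes stay black
Mat filled = binary.clone();
floodFill(filled, Point(0, 0), Scalar(255));
// Invert, so that only the holes are white
Mat holes;
bitwise_not(filled, holes);
// OR the holes back onto the original mask
Mat result = binary | holes;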
In case someone is looking for a C++ implementation:
std::vector<std::vector<cv::Point> > contours_vector;
cv::findContours(input_image, contours_vector, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);
cv::Mat contourImage(input_image.size(), CV_8UC1, cv::Scalar(0));
for (size_t contour_index = 0; contour_index < contours_vector.size(); contour_index++) {
    cv::drawContours(contourImage, contours_vector, (int)contour_index, cv::Scalar(255), -1);
}
cv::imshow("con", contourImage);
cv::waitKey(0);
Try using cvFindContours() function. You can use it to find connected components. With the right parameters this function returns a list with the contours of each connected components.
Find the contours which represent a hole. Then use cvDrawContours() to fill up the selected contour by the foreground color thereby closing the holes.
I think if the objects are touching or crowded, there will be some problems using contours and mathematical morphology opening.
Instead, the following simple solution was found and tested. It works very well, not only for these images but for other images as well.
Here are the (optimized) steps, as described at http://blogs.mathworks.com/steve/2008/08/05/filling-small-holes/:
Let I be the input image:
1. filled_I = floodfill(I)  // fill every hole in the image
2. inverted_I = invert(I)
3. holes_I = filled_I AND inverted_I  // finds all the holes
4. cc_list = connectedcomponents(holes_I)  // list of all connected components in holes_I
5. holes_I = remove(cc_list, holes_I, smallholes_threshold_size)  // remove from holes_I all holes having size > smallholes_threshold_size
6. out_I = I OR holes_I  // fill only the small holes
In short, the algorithm just finds all the holes, removes the big ones, and writes the small ones back onto the original image (a C++ sketch of this is given below).
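A sketch of these steps in C++ (my own translation, assuming I is a 0/255 image with white foreground, that the pixel (0,0) is background, and that smallHolesThreshold is a tuning value you pick):
// Fill only the holes smaller than a given area
int smallHolesThreshold = 100;                // maximum hole area (in pixels) that will be filled

// steps 1-3: find all holes. Flood-filling the background from a border pixel
// leaves exactly the holes black, so inverting that copy gives holes_I directly.
Mat tmp = I.clone();
floodFill(tmp, Point(0, 0), Scalar(255));
Mat holes;
bitwise_not(tmp, holes);                      // holes_I: white = the holes of I

// steps 4-5: keep only the small holes
vector<vector<Point> > cc;
findContours(holes.clone(), cc, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
Mat smallHoles = Mat::zeros(I.size(), CV_8UC1);
for (size_t i = 0; i < cc.size(); i++)
    if (contourArea(cc[i]) <= smallHolesThreshold)
        drawContours(smallHoles, cc, (int)i, Scalar(255), -1);

// step 6: write the small holes back onto the original image
Mat out = I | smallHoles;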
I've been looking around the internet to find a proper imfill function (like the one in Matlab) that works in C with OpenCV. After some research, I finally came up with a solution:
IplImage* imfill(IplImage* src)
{
    CvScalar white = CV_RGB( 255, 255, 255 );
    IplImage* dst = cvCreateImage( cvGetSize(src), 8, 3);
    CvMemStorage* storage = cvCreateMemStorage(0);
    CvSeq* contour = 0;
    cvFindContours(src, storage, &contour, sizeof(CvContour), CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE );
    cvZero( dst );
    for( ; contour != 0; contour = contour->h_next )
    {
        cvDrawContours( dst, contour, white, white, 0, CV_FILLED);
    }
    IplImage* bin_imgFilled = cvCreateImage(cvGetSize(src), 8, 1);
    cvInRangeS(dst, white, white, bin_imgFilled);
    // free the intermediate image and the contour storage
    cvReleaseImage(&dst);
    cvReleaseMemStorage(&storage);
    return bin_imgFilled;
}
For this: Original Binary Image
Result is: Final Binary Image
The trick is in the parameter settings of the cvDrawContours function:
cvDrawContours( dst, contour, white, white, 0, CV_FILLED);
dst = destination image
contour = pointer to the first contour
white = color used to fill the contour
0 = maximal level for drawn contours; if 0, only the given contour is drawn
CV_FILLED = thickness of the lines the contours are drawn with; if it is negative (for example, CV_FILLED), the contour interiors are drawn
More info in the OpenCV documentation.
There is probably a way to get "dst" directly as a binary image but I couldn't find how to use the cvDrawContours function with binary values.
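One variant that should work (an untested sketch of my own, reusing the contour list found above): allocate the destination with a single channel and draw with cvScalarAll(255), which makes the cvInRangeS conversion unnecessary:
// Draw the filled contours directly into a single-channel (binary) image
IplImage* binDst = cvCreateImage(cvGetSize(src), 8, 1);
cvZero(binDst);
CvScalar white1 = cvScalarAll(255);
for( ; contour != 0; contour = contour->h_next )
{
    cvDrawContours(binDst, contour, white1, white1, 0, CV_FILLED);
}
// binDst is already a 0/255 binary image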

OpenCV warping from one triangle to another

I would like to map one triangle inside an OpenCV Mat to another one, pretty much like warpAffine does (check it here), but for triangles instead of quads, in order to use it in a Delaunay triangulation.
I know one is able to use a mask, but I'd like to know if there's a better solution.
I have copied the above image and the following C++ code from my post Warp one triangle to another using OpenCV ( C++ / Python ). The comments in the code below should give a good idea of what is going on. For more details and for Python code you can visit the above link. All the pixels inside triangle tri1 in img1 are transformed to triangle tri2 in img2. Hope this helps.
void warpTriangle(Mat &img1, Mat &img2, vector<Point2f> tri1, vector<Point2f> tri2)
{
    // Note: img1 and img2 should be floating point (e.g. CV_32FC3) images
    // so that they can be multiplied with the float mask below.

    // Find bounding rectangle for each triangle
    Rect r1 = boundingRect(tri1);
    Rect r2 = boundingRect(tri2);

    // Offset points by left top corner of the respective rectangles
    vector<Point2f> tri1Cropped, tri2Cropped;
    vector<Point> tri2CroppedInt;
    for(int i = 0; i < 3; i++)
    {
        tri1Cropped.push_back( Point2f( tri1[i].x - r1.x, tri1[i].y - r1.y) );
        tri2Cropped.push_back( Point2f( tri2[i].x - r2.x, tri2[i].y - r2.y) );

        // fillConvexPoly needs a vector of Point and not Point2f
        tri2CroppedInt.push_back( Point((int)(tri2[i].x - r2.x), (int)(tri2[i].y - r2.y)) );
    }

    // Apply warpImage to small rectangular patches
    Mat img1Cropped;
    img1(r1).copyTo(img1Cropped);

    // Given a pair of triangles, find the affine transform.
    Mat warpMat = getAffineTransform( tri1Cropped, tri2Cropped );

    // Apply the Affine Transform just found to the src image
    Mat img2Cropped = Mat::zeros(r2.height, r2.width, img1Cropped.type());
    warpAffine( img1Cropped, img2Cropped, warpMat, img2Cropped.size(), INTER_LINEAR, BORDER_REFLECT_101);

    // Get mask by filling triangle
    Mat mask = Mat::zeros(r2.height, r2.width, CV_32FC3);
    fillConvexPoly(mask, tri2CroppedInt, Scalar(1.0, 1.0, 1.0), 16, 0);

    // Copy triangular region of the rectangular patch to the output image
    multiply(img2Cropped, mask, img2Cropped);
    multiply(img2(r2), Scalar(1.0,1.0,1.0) - mask, img2(r2));
    img2(r2) = img2(r2) + img2Cropped;
}
You should use getAffineTransform to find the transform and warpAffine to apply it.
