How to draw contour without filling in OpenCV - opencv

Trying to extract numbers from an image, but I'm having issues with 0, 8, and similar digits because of their enclosed portions.
When I use cv.drawContours(black_img, [contour_num], -1, (255, 255, 255), thickness=cv.FILLED)
it obviously fills the inside of the contour as well. How do I prevent this? How do I draw only the "outer" part of the number? Thanks
(images: drawn contour, original image)

cv.drawContours(black_img, [contour_num], -1, (255, 255, 255), thickness=cv.FILLED)
cv.FILLED is equal to -1, which is why the contours are drawn filled.
You should specify a thickness greater than zero instead, e.g. thickness=1, thickness=2, etc.

Related

Extract dark contour

I want to extract the darker contours from images with OpenCV. I have tried a simple threshold such as the one below (C++):
cv::threshold(gray, output, threshold, 255, THRESH_BINARY_INV);
I can iterate the threshold, say from 50 to 200, and then extract the darker contours in the middle for images with a clear distinction, such as this one.
Here is the result of the threshold.
But if the contour is near the border, thresholding fails because the pixel values are almost the same.
For example, take this image.
What I want to ask is: is there any technique in OpenCV that can extract a darker contour in the middle of the image even when the contour reaches the border and has almost the same pixel values as the border?
(Updated)
After thresholding, the darker contour in the middle overlaps with the top border.
This makes me fail to extract characters such as the first two "SS".
I think you can simply add an edge-preserving smoothing step to solve this:
// read input image
Mat inputImg = imread("test2.tif", IMREAD_GRAYSCALE);
Mat filteredImg;
bilateralFilter(inputImg, filteredImg, 5, 60, 20); // edge-preserving smoothing
// compute laplacian
Mat laplaceImg;
Laplacian(filteredImg, laplaceImg, CV_16S, 1);
// threshold
Mat resImg;
threshold(laplaceImg, resImg, 10, 1, THRESH_BINARY);
// write result
imwrite("res2.tif", resImg);
This will give you the following result: (result image)
Regards,
I think using a Laplacian could partially solve your problem:
// read input image
Mat inputImg = imread("test2.tif", IMREAD_GRAYSCALE);
// compute laplacian
Mat laplaceImg;
Laplacian(inputImg, laplaceImg, CV_16S, 1);
Mat resImg;
threshold(laplaceImg, resImg, 30, 1, THRESH_BINARY);
// write result
imwrite("res2.tif", resImg);
Using this code you should obtain something like this result.
You can then play with the final threshold value and with the Laplacian kernel size.
You will probably have to remove small artifacts after this operation.
Regards

How to detect room borders with OpenCV

I want to clean up a floor plan and detect the walls. I found this solution, but the code is quite difficult to understand.
Especially this line (how does it remove text and other objects inside rooms?):
DeleteSmallComponents[Binarize[img, {0, .2}]];
https://mathematica.stackexchange.com/questions/19546/image-processing-floor-plan-detecting-rooms-borders-area-and-room-names-t
img = Import["http://i.stack.imgur.com/qDhl7.jpg"]
nsc = DeleteSmallComponents[Binarize[img, {0, .2}]];
m = MorphologicalTransform[nsc, {"Min", "Max"}]
How can I do the same with OpenCV?
In OpenCV there is a slightly different approach to processing images. To do this kind of computation you have to think in a more low-level way, i.e. in terms of basic image-processing operations.
For example, line you showed:
DeleteSmallComponents[Binarize[img, {0, .2}]];
could be expressed in OpenCV by the following algorithm:
binarize the image
apply a morphological open/close, or a simple dilation/erosion (depending on the colors of the objects and the background):
cv::threshold(img, img, 100, 255, CV_THRESH_BINARY);
cv::dilate(img, img, cv::Mat());
cv::dilate(img, img, cv::Mat());
Further, you can implement your own distance transform, or use, for example, the hit-or-miss routine (which, being a basic operation, is implemented in OpenCV) to detect corners:
cv::Mat kernel = (cv::Mat_<int>(7, 7) <<
     0,  1,  0,  0,  0,  0,  0,
    -1,  1,  0,  0,  0,  0,  0,
    -1,  1,  0,  0,  0,  0,  0,
    -1,  1,  0,  0,  0,  0,  0,
    -1,  1,  0,  0,  0,  0,  0,
    -1,  1,  1,  1,  1,  1,  1,
    -1, -1, -1, -1, -1, -1,  0);
cv::Mat left_down,left_up,right_down,right_up;
cv::morphologyEx(img, left_down, cv::MORPH_HITMISS, kernel);
cv::flip(kernel, kernel, 1);
cv::morphologyEx(img, right_down, cv::MORPH_HITMISS, kernel);
cv::flip(kernel, kernel, 0);
cv::morphologyEx(img, right_up, cv::MORPH_HITMISS, kernel);
cv::flip(kernel, kernel, 1);
cv::morphologyEx(img, left_up, cv::MORPH_HITMISS, kernel);
and then you will have a picture like this:
One more picture with bigger dots (after a single dilation):
Finally, you can process the coordinates of the corners found to determine the rooms.
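That last step, collecting corner coordinates from the hit-or-miss response images, can be sketched like this. The response arrays below are hand-made stand-ins for the outputs of the morphologyEx calls above:

```python
import numpy as np

# Stand-ins for two hit-or-miss responses: one white pixel per detected corner
left_down = np.zeros((20, 20), dtype=np.uint8)
left_down[5, 3] = 255   # pretend a corner was detected at x=3, y=5
right_up = np.zeros((20, 20), dtype=np.uint8)
right_up[12, 15] = 255  # and another at x=15, y=12

def corner_points(response):
    """Return (x, y) coordinates of all non-zero pixels in a response image."""
    ys, xs = np.nonzero(response)
    return list(zip(xs.tolist(), ys.tolist()))

corners = {
    "left_down": corner_points(left_down),
    "right_up": corner_points(right_up),
}
```

Pairing up corners of the four orientation types is then a geometry problem on these coordinate lists.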
EDIT: for images with "double wall lines" like:
We have to "merge" the double wall lines first, so the code will look like this:
cv::threshold(img, img, 220, 255, CV_THRESH_BINARY);
cv::dilate(img, img, cv::Mat()); //small object textures
cv::erode(img, img, cv::getStructuringElement(CV_SHAPE_RECT, cv::Size(5, 5)),cv::Point(-1,-1),2);
cv::dilate(img, img, cv::getStructuringElement(CV_SHAPE_RECT, cv::Size(5, 5)), cv::Point(-1, -1), 3);
And the resulting image:
Sadly, if the image properties change you will have to adjust the algorithm parameters slightly. It is possible to provide a general solution, but you would have to account for most of the possible variants of the problem, and it would be considerably more complex.

Remove background in opencv to make text more clear

I am trying to create an app that can read text from an image, but I'm having problems clearing the background. I want results like:
Input Image 1 :
Output Image 1 :
This is the code I have tried:
cvtColor(org, tmp, CV_BGR2GRAY);
normalize(tmp, tmp, 0, 255, NORM_MINMAX);
threshold(tmp, dst, 0, 255, CV_THRESH_OTSU);
The lines that interest you are oriented at either 0 or 90 degrees, with a small variance in either direction, while the lines in the background patterns are slanted. You can identify the lines with the Canny algorithm and then check their orientation. You'll be left with some gaps where the vertical and horizontal lines meet, depending on the font; then return to the original image and use a color-based watershed, or connected components, or whatever works, to avoid losing those connecting regions.

OpenCV: How to get contours of object that is black and white?

I have problems getting the contours of an object in my picture(s).
In order to remove all the noise, I use adjustROI() and Canny().
I also tried erode() and dilate(), Laplacian(), GaussianBlur(), Sobel()... and I even found this code snippet to sharpen a picture:
GaussianBlur(src, dst_gaussian, Size(0, 0), 3);
addWeighted(src, 1.5, dst_gaussian, -0.5, 0, dst);
But my result is always the same: my object is filled with black and white (like noise on a TV screen), so it is impossible to get its contour with findContours() (findContours() finds a million contours, but not the one of the whole object; I check this with drawContours()).
I use C++ and load my picture as a grayscale Mat (for Canny it has to be grayscale). My object has a different shape in every picture, but it is always around the middle of the picture.
I either need a way to get a better-colored object through image processing (but I don't know what else to try), or a way to fill the object with color after image processing (without having its contour, because the contour is what I want in the end).
Any ideas are welcome. Thank you in advance.
I found a solution that works in most cases: I fill my object using the probabilistic Hough transform, HoughLinesP().
vector<Vec4i> lines;
HoughLinesP(dst, lines, 1, CV_PI/180, 80, 30, 10);
for (size_t i = 0; i < lines.size(); i++)
{
    line(color_dst, Point(lines[i][0], lines[i][1]),
         Point(lines[i][2], lines[i][3]), Scalar(0, 0, 255), 3, 8);
}
This is from some sample code the OpenCV documentation provides.
After an edge-detection algorithm (such as Canny()) has produced a binary picture, the probabilistic Hough transform finds line segments in it; drawn together, these segments represent the whole object. Of course, some of the parameters have to be adapted to the kind of picture.
I'm not sure if this will work on every picture or every object, but in my case, it does.

black lines around image using warpPerspective for stitching in opencv

I'm trying to build a mosaic panorama from a video.
I stitched every frame together, but there is a problem in the final image.
I used findHomography for the translation matrix, a mask, warpPerspective, and then copied each newly warped image into the final panorama image.
I think this is a problem with warpPerspective. Does anybody know a solution for fixing these black lines?
These black vertical lines are at the corners of the stitched image. How do I remove them?
I solved it: I figured out the corners of the stitched image and manually edited the mask, drawing black lines onto the mask with this code:
line(mask, corner_trans[0], corner_trans[2], CV_RGB(0, 0, 0), 4, 8);
line(mask, corner_trans[2], corner_trans[3], CV_RGB(0, 0, 0), 4, 8);
line(mask, corner_trans[3], corner_trans[1], CV_RGB(0, 0, 0), 4, 8);
line(mask, corner_trans[1], corner_trans[0], CV_RGB(0, 0, 0), 4, 8);
