How to extract the area of the text in a frame? - opencv

I am working on a program that extracts the area of given text from a frame using OpenCV. After the area is found, it should be blurred.
The text displayed in the frame is known in advance; it is always horizontal and its color is white.
However, I don't know where in the frame the text is displayed, and its position occasionally changes.
How can I extract the area (x, y, width, height) of the text with OpenCV?
Are there any tools to do this?
I have attached two sample frames. You can see the 8-character hexadecimal code under the game mark.
sample 1: a case with a complex color background
sample 2: a case with a single-color background
Please advise, thanks.

There are several resources on the web. One way is to detect text by finding closely spaced edge elements (link). In summary, you first detect edges in your image with cv::Canny, cv::Sobel, or any other edge detection method (try out which works best in your case). Then you binarize the result with a threshold.
To remove artifacts you can apply a rank filter. Then you merge the letters with morphological operations (cv::morphologyEx). You could try a dilation or a closing; the closing will close the spaces between the letters and merge them together without changing their size too much. You have to play with the kernel size and shape. Now you detect contours with cv::findContours, do a polygon approximation, and compute the bounding rectangle of each contour.
To keep just the right contours you should test for a plausible size (e.g. if (contours[i].size() > 100)). Then you can order the found fields according to this article, where it is explained in detail.
This is the code from the first post:
#include "opencv2/opencv.hpp"

std::vector<cv::Rect> detectLetters(cv::Mat img)
{
    std::vector<cv::Rect> boundRect;
    cv::Mat img_gray, img_sobel, img_threshold, element;
    cvtColor(img, img_gray, CV_BGR2GRAY);
    cv::Sobel(img_gray, img_sobel, CV_8U, 1, 0, 3, 1, 0, cv::BORDER_DEFAULT);
    cv::threshold(img_sobel, img_threshold, 0, 255, CV_THRESH_OTSU + CV_THRESH_BINARY);
    element = getStructuringElement(cv::MORPH_RECT, cv::Size(17, 3));
    cv::morphologyEx(img_threshold, img_threshold, CV_MOP_CLOSE, element); // Does the trick
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(img_threshold, contours, 0, 1);
    std::vector<std::vector<cv::Point> > contours_poly(contours.size());
    for (int i = 0; i < contours.size(); i++)
        if (contours[i].size() > 100)
        {
            cv::approxPolyDP(cv::Mat(contours[i]), contours_poly[i], 3, true);
            cv::Rect appRect(boundingRect(cv::Mat(contours_poly[i])));
            if (appRect.width > appRect.height)
                boundRect.push_back(appRect);
        }
    return boundRect;
}
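The question also asks about blurring the detected text. As a rough follow-up sketch (not from the original answer), here is a minimal Python example that assumes the rectangles have already been obtained, e.g. from a port of detectLetters() above; frame.png and the rectangle values are placeholders:
import cv2

# Hypothetical example: blur previously detected text rectangles in a frame.
# "rects" stands in for the output of a detector like detectLetters() above.
frame = cv2.imread("frame.png")
rects = [(100, 200, 160, 24)]  # placeholder (x, y, w, h) values

for (x, y, w, h) in rects:
    roi = frame[y:y + h, x:x + w]
    # A heavy Gaussian blur hides the text; the kernel size may need tuning.
    frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (23, 23), 0)

cv2.imwrite("frame_blurred.png", frame)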

Related

Counting erythrocytes

I'm trying to count the number of erythrocytes in a microscope image. These are the smaller cells. (I first tried a CNN with a sliding window, but it was too slow, so I'm looking for a simpler segmentation.)
My approach is:
threshold
find and draw all contours filled, so that the cells won't have holes
compute the distance transform
iterate over all maxima
mask out the current maximum with a circle whose radius equals the maximum value, and store the maximum's position
My problem is that some cells have a "hole" in the middle: a bright area similar in value to the background. If I threshold the image, some of the cell masks become not circles but half circles, with distance-transform values far below the expected value.
I've marked the cells having these "holes" on the mask image.
How could I close the hole or the circle? Is there a thresholding method or trick?
Below is the part of the code responsible for cell extraction:
cv::adaptiveThreshold(_imgIn, th, 255, ADAPTIVE_THRESH_GAUSSIAN_C, (bgblack ? CV_THRESH_BINARY : CV_THRESH_BINARY_INV), 35, 5); //| CV_THRESH_OTSU);
Mat kernel1 = Mat::ones(3, 3, CV_8UC1);
for (int i = 0; i < 5; i++)
{
    dilate(th, th, kernel1);
    erode(th, th, kernel1);
}

vector<vector<Point> > contours;
findContours(th, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);

mask = 0;
for (unsigned int i = 0; i < contours.size(); i++)
{
    drawContours(mask, contours, i, Scalar(255), CV_FILLED);
}
cv::distanceTransform(mask, dist, CV_DIST_L2, 3);

double min, max;
cv::Point pmax;
Mat tmp1 = dist.clone();
while (true)
{
    cv::minMaxLoc(tmp1, 0, &max, 0, &pmax);
    if (max < 5)
        break;
    cv::circle(_imgIn, pmax, 3, cv::Scalar(0), CV_FILLED);
    cv::circle(tmp1, pmax, max, cv::Scalar(0), CV_FILLED);
}
Closing holes
Closing is an important operator from the field of mathematical morphology. Like its dual operator opening, it can be derived from the fundamental operations of erosion and dilation. Like those operators it is normally applied to binary images, although there are graylevel versions. Closing is similar in some ways to dilation in that it tends to enlarge the boundaries of foreground (bright) regions in an image (and shrink background color holes in such regions), but it is less destructive of the original boundary shape. As with other morphological operators, the exact operation is determined by a structuring element. The effect of the operator is to preserve background regions that have a similar shape to this structuring element, or that can completely contain the structuring element, while eliminating all other regions of background pixels.
In OpenCV this looks as follows:
import cv2 as cv
import numpy as np

img = cv.imread('j.png', 0)
kernel = np.ones((5, 5), np.uint8)
closing = cv.morphologyEx(img, cv.MORPH_CLOSE, kernel)
Full documentation here.
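Applied to the question above, one option (my own sketch, not from the original answer) is to run the closing on the thresholded cell mask before the distance transform, so the bright holes no longer cut the masks into half circles; cells_mask.png and the kernel size are assumptions:
import cv2 as cv
import numpy as np

# Assumed input: the binary mask produced by the adaptive threshold step.
mask = cv.imread('cells_mask.png', cv.IMREAD_GRAYSCALE)

# Close the bright holes inside the cells; the kernel should be larger than
# the holes but smaller than the cells themselves, so it needs tuning.
kernel = cv.getStructuringElement(cv.MORPH_ELLIPSE, (7, 7))
closed = cv.morphologyEx(mask, cv.MORPH_CLOSE, kernel)

# Distance transform on the repaired mask, as in the original pipeline.
dist = cv.distanceTransform(closed, cv.DIST_L2, 3)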

drawing a line between feature points without using "drawmatches" function

I got the feature points from two consecutive frames by using different detectors in the features2d framework:
In the first frame, the feature points are plotted in red.
In the next frame, the feature points are plotted in blue.
I want to draw lines between these red and blue (matched) points inside the first frame (the image with red dots). The drawMatches function in OpenCV doesn't help, as it shows a window with the two consecutive frames next to each other. Is this possible in OpenCV?
Thanks in advance
I guess that you want to visualize how each keypoint moves between the two frames. To my knowledge, there is no built-in function in OpenCV meeting this requirement.
However, as you have already called the drawMatches() function, you have the two keypoint sets (taking the C++ API as an example), vector<KeyPoint>& keypoints1, keypoints2, and the matches, vector<DMatch>& matches1to2. You can then get the pixel coordinates of each keypoint from keypoints1[i].pt and draw lines between the matched keypoints by calling the line() function.
Be careful: since you want to draw keypoints2 in the first frame, the pixel coordinates may exceed the size of img1.
There is a quick way to get a sense of the keypoints' motion. Below is the result shown by imshowpair() in Matlab:
After finding good matches, I draw the lines using this code.
(...some code...)
//draw good matches
for( int i = 0; i < (int)good_matches.size(); i++ )
{
    printf( "-- Good Match [%d] Keypoint 1: %d -- Keypoint 2: %d \n", i, good_matches[i].queryIdx, good_matches[i].trainIdx );

    // query image is the first frame
    Point2f point_old = keypoints_1[good_matches[i].queryIdx].pt;
    // train image is the next frame in which we want to find the matched keypoints
    Point2f point_new = keypoints_2[good_matches[i].trainIdx].pt;

    // keypoint color for frame 1: RED
    circle(img_1, point_old, 3, Scalar(0, 0, 255), 1);
    circle(img_2, point_old, 3, Scalar(0, 0, 255), 1);
    // keypoint color for frame 2: BLUE
    circle(img_1, point_new, 3, Scalar(255, 0, 0), 1);
    circle(img_2, point_new, 3, Scalar(255, 0, 0), 1);

    // draw a line between the matched keypoints
    line(img_1, point_old, point_new, Scalar(0, 255, 0), 2, 8, 0);
    line(img_2, point_old, point_new, Scalar(0, 255, 0), 2, 8, 0);
}
imwrite("directory/image1.jpg", img_1);
imwrite("directory/image2.jpg", img_2);
(...some code...)
I saved the results to the first frame (img_1) and the next frame (img_2). As you can see, I get different results, but the line shape is the same. In the video homography sample of OpenCV, the keypoint tracking seems accurate. They follow this approach: detect keypoints --> compute descriptors --> warp keypoints --> match --> find homography --> draw matches. However, I apply: detect keypoints --> compute descriptors --> match --> draw matches.
I am confused about whether I have to consider the homography and perspective (or other things) to see the keypoint movements accurately.
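For reference, here is a rough Python sketch (not from the original posts) of the homography step mentioned above: estimate a homography from the matched points with RANSAC and warp the first frame's keypoints into the second frame, which also lets you filter out inconsistent matches. The point values below are placeholders standing in for the matched coordinates extracted from the keypoints:
import cv2
import numpy as np

# Assumed inputs: Nx2 arrays of matched pixel coordinates from the two frames.
pts1 = np.float32([[100, 120], [240, 80], [320, 200], [50, 300], [400, 150], [200, 260]])
pts2 = np.float32([[105, 118], [246, 83], [326, 197], [54, 302], [405, 149], [205, 258]])

# Estimate the homography with RANSAC; "mask" flags the inlier matches.
H, mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 5.0)

# Warp the first frame's points into the second frame's coordinates.
warped = cv2.perspectiveTransform(pts1.reshape(-1, 1, 2), H).reshape(-1, 2)

# For inlier matches, the warped position should be close to the matched
# position, i.e. the motion is consistent with the estimated homography.
print(np.hstack([pts2, warped])[mask.ravel() == 1])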
my results for first frame (img_1)
and next frame (img_2)

Error using cvCountNonZero in opencv

I am trying to count the number of non-zero pixels in a contour retrieved from a Canny edge image using OpenCV (in C). I am using cvFindNextContour to get each subsequent contour from a contour scanner.
But when I use cvCountNonZero on the contour, this error shows up:
Bad flag (parameter or structure field) (Unrecognized or unsupported array type)
in function cvGetMat, C:\User\..\cvarray.cpp(2881)
My code is:
cvCvtColor(image, gray, CV_BGR2GRAY);
cvCanny(gray, edge, (float)edge_thresh, (float)edge_thresh * 4, 3);

sc = cvStartFindContours( edge, mem,
                          sizeof(CvContour),
                          CV_RETR_LIST,
                          CV_CHAIN_APPROX_SIMPLE,
                          cvPoint(0,0) );

while ((contour = cvFindNextContour(sc)) != NULL)
{
    CvScalar color = CV_RGB( rand()&255, rand()&255, rand()&255 );
    printf("%d\n", cvCountNonZero(contour));
    cvDrawContours(final, contour, color, CV_RGB(0,0,0), -1, 1, 8, cvPoint(0,0));
}
Any kind of help is highly appreciated. Thanks in advance.
cvCountNonZero(CvArr*) is for counting the non-zero elements of an array or IplImage, not of a CvSeq* contour. That is why the error appears. Here is a solution to the problem:
CvRect rect = cvBoundingRect( contour, 0);
cvSetImageROI(img1,rect);
cout<<cvCountNonZero(img1)<<endl;
cvResetImageROI(img1);
//where img1 is the binary image in which you find the contours.
The code can be explained in the following way:
1. First, make a rectangular region (bounding box) around each contour.
2. Set the image ROI to that region.
3. Use cvCountNonZero() to count the non-zero pixels in that region.
4. Reset the image ROI.
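For what it's worth, here is the same idea as a rough sketch in the newer Python API (not part of the original answer); binary.png is a placeholder for the binary/edge image in which the contours are found:
import cv2

# Placeholder input: the binary image in which the contours were found.
binary = cv2.imread("binary.png", cv2.IMREAD_GRAYSCALE)

# OpenCV 4.x returns (contours, hierarchy); 3.x returns three values.
contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    # Count non-zero pixels inside the bounding rectangle (the ROI).
    print(cv2.countNonZero(binary[y:y + h, x:x + w]))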
Happy coding.

Filling holes inside a binary object

I have a problem with filling the white holes inside black coins so that I end up with a 0-255 binary image of solid black coins. I have used a median filter to accomplish it, but in that case the connecting bridges between coins grow and it becomes impossible to recognize them after several erosions... So I need a simple floodFill-like method in OpenCV.
Here is my image with holes:
EDIT: The floodFill-like function must fill the holes in big components without prompting for X, Y coordinates as a seed...
EDIT: I tried to use the cvDrawContours function, but it doesn't fill contours located inside bigger ones.
Here is my code:
CvMemStorage mem = cvCreateMemStorage(0);
CvSeq contours = new CvSeq();
CvSeq ptr = new CvSeq();
int sizeofCvContour = Loader.sizeof(CvContour.class);

cvThreshold(gray, gray, 150, 255, CV_THRESH_BINARY_INV);

int numOfContours = cvFindContours(gray, mem, contours, sizeofCvContour, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
System.out.println("The num of contours: " + numOfContours); // prints 87, ok

Random rand = new Random();
for (ptr = contours; ptr != null; ptr = ptr.h_next()) {
    Color randomColor = new Color(rand.nextFloat(), rand.nextFloat(), rand.nextFloat());
    CvScalar color = CV_RGB(randomColor.getRed(), randomColor.getGreen(), randomColor.getBlue());
    cvDrawContours(gray, ptr, color, color, -1, CV_FILLED, 8);
}

CanvasFrame canvas6 = new CanvasFrame("drawContours");
canvas6.showImage(gray);
Result: (you can see black holes inside each coin)
There are two methods to do this:
1) Contour Filling:
First, invert the image, find the contours in it, fill them, and invert back.
des = cv2.bitwise_not(gray)
contour, hier = cv2.findContours(des, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contour:
    cv2.drawContours(des, [cnt], 0, 255, -1)
gray = cv2.bitwise_not(des)
Resulting image:
2) Image Opening:
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(3,3))
res = cv2.morphologyEx(gray,cv2.MORPH_OPEN,kernel)
The resulting image is as follows:
You can see that there is not much difference between the two cases.
NB: gray is the grayscale image; all code is in OpenCV-Python.
Reference: OpenCV Morphological Transformations.
A simple dilate and erode would close the gaps fairly well, I imagine. I think maybe this is what you're looking for.
A more robust solution would be to run an edge detector on the whole image and then a Hough transform for circles. A quick Google search shows there are code samples available in various languages for size-invariant circle detection using a Hough transform, so hopefully that will give you something to go on.
The benefit of the Hough transform is that the algorithm will actually give you an estimate of the size and location of every circle, so you can rebuild an ideal image based on that model. It should also be very robust to overlap, especially considering the quality of the input image here (i.e. less worry about false positives, so you can lower the threshold for results).
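As a rough illustration of that suggestion (my own sketch, not from the original answer), the following Python code detects circles with cv2.HoughCircles and redraws them as filled discs to rebuild an ideal, hole-free mask; coins.png and all parameter values are assumptions that would need tuning:
import cv2
import numpy as np

img = cv2.imread("coins.png", cv2.IMREAD_GRAYSCALE)
blurred = cv2.medianBlur(img, 5)  # smoothing reduces false circle detections

# minDist, minRadius and maxRadius are guesses; tune them to the coin size.
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
                           param1=100, param2=30, minRadius=20, maxRadius=60)

mask = np.zeros_like(img)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        # Redraw each detected coin as a filled disc: the "ideal" model image.
        cv2.circle(mask, (int(x), int(y)), int(r), 255, -1)

cv2.imwrite("coins_ideal.png", mask)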
You might be looking for the Fillhole transformation, an application of morphological image reconstruction.
This transformation will fill the holes in your coins, though at the cost of also filling all holes between groups of adjacent coins. The Hough-space or opening-based solutions suggested by the other posters will probably give you better high-level recognition results.
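A common way to implement such a hole fill in OpenCV (a sketch of the flood-fill trick, not of the morphological reconstruction operator itself) is to flood-fill the background from a border pixel, invert that result, and OR it with the original mask. Here binary.png is a placeholder, the objects are assumed white on black (invert first if not), and the seed (0, 0) assumes the image corner is background:
import cv2
import numpy as np

mask = cv2.imread("binary.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

# Flood-fill the background from the corner; the fill mask must be 2 px larger.
flood = mask.copy()
ff_mask = np.zeros((mask.shape[0] + 2, mask.shape[1] + 2), np.uint8)
cv2.floodFill(flood, ff_mask, (0, 0), 255)

# Holes are background pixels that the flood fill could not reach from the border.
holes = cv2.bitwise_not(flood)
filled = cv2.bitwise_or(mask, holes)
cv2.imwrite("filled.png", filled)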
In case someone is looking for the cpp implementation -
std::vector<std::vector<cv::Point> > contours_vector;
cv::findContours(input_image, contours_vector, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);

cv::Mat contourImage(input_image.size(), CV_8UC1, cv::Scalar(0));
for (ushort contour_index = 0; contour_index < contours_vector.size(); contour_index++) {
    cv::drawContours(contourImage, contours_vector, contour_index, cv::Scalar(255), -1);
}

cv::imshow("con", contourImage);
cv::waitKey(0);
Try using the cvFindContours() function. You can use it to find connected components. With the right parameters this function returns a list with the contours of each connected component.
Find the contours which represent holes, then use cvDrawContours() to fill the selected contours with the foreground color, thereby closing the holes.
I think that if the objects are touching or crowded, there will be some problems using contours and morphological opening.
Instead, the following simple solution was found and tested. It works very well, not only for these images but also for other images.
Here are the steps (optimized), as seen at http://blogs.mathworks.com/steve/2008/08/05/filling-small-holes/:
Let I be the input image.
1. filled_I = floodfill(I) // fill every hole in the image
2. inverted_I = invert(I)
3. holes_I = filled_I AND inverted_I // find all holes
4. cc_list = connectedcomponent(holes_I) // list of all connected components in holes_I
5. holes_I = remove(cc_list, holes_I, smallholes_threshold_size) // remove all holes from holes_I having size > smallholes_threshold_size
6. out_I = I OR holes_I // fill only the small holes
In short, the algorithm just finds all the holes, removes the big ones, and then writes only the small ones back onto the original image.
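A rough Python translation of those steps (my own sketch, not from the original answer); binary.png and the 200-pixel hole-size threshold are placeholders, and the objects are assumed white on black:
import cv2
import numpy as np

I = cv2.imread("binary.png", cv2.IMREAD_GRAYSCALE)
_, I = cv2.threshold(I, 127, 255, cv2.THRESH_BINARY)

# Step 1: filled_I = floodfill(I) -- flood-fill the background from the border;
# anything the fill cannot reach is a hole.
flood = I.copy()
ff_mask = np.zeros((I.shape[0] + 2, I.shape[1] + 2), np.uint8)
cv2.floodFill(flood, ff_mask, (0, 0), 255)
filled_I = cv2.bitwise_or(I, cv2.bitwise_not(flood))

# Steps 2-3: holes_I = filled_I AND invert(I)
holes_I = cv2.bitwise_and(filled_I, cv2.bitwise_not(I))

# Steps 4-5: keep only the small holes (assumed size threshold of 200 px).
n, labels, stats, _ = cv2.connectedComponentsWithStats(holes_I)
small_holes = np.zeros_like(holes_I)
for label in range(1, n):
    if stats[label, cv2.CC_STAT_AREA] <= 200:
        small_holes[labels == label] = 255

# Step 6: out_I = I OR holes_I, i.e. fill only the small holes.
out_I = cv2.bitwise_or(I, small_holes)
cv2.imwrite("filled_small_holes.png", out_I)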
I've been looking around the internet to find a proper imfill function (like the one in Matlab) working in C with OpenCV. After some research, I finally came up with a solution:
IplImage* imfill(IplImage* src)
{
    CvScalar white = CV_RGB( 255, 255, 255 );

    IplImage* dst = cvCreateImage( cvGetSize(src), 8, 3);
    CvMemStorage* storage = cvCreateMemStorage(0);
    CvSeq* contour = 0;

    cvFindContours(src, storage, &contour, sizeof(CvContour), CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE );
    cvZero( dst );

    for( ; contour != 0; contour = contour->h_next )
    {
        cvDrawContours( dst, contour, white, white, 0, CV_FILLED);
    }

    IplImage* bin_imgFilled = cvCreateImage(cvGetSize(src), 8, 1);
    cvInRangeS(dst, white, white, bin_imgFilled);

    return bin_imgFilled;
}
For this: Original Binary Image
Result is: Final Binary Image
The trick is in the parameter settings of the cvDrawContours function:
cvDrawContours( dst, contour, white, white, 0, CV_FILLED);
dst = destination image
contour = pointer to the first contour
white = color used to fill the contours
0 = maximal level for drawn contours; if 0, only the contour itself is drawn
CV_FILLED = thickness of the lines the contours are drawn with; if negative (for example, CV_FILLED), the contour interiors are drawn
More info in the OpenCV documentation.
There is probably a way to get dst directly as a binary image, but I couldn't find how to use the cvDrawContours function with binary values.

Hough transformation for iris detection in OpenCV

I wrote the code for the Hough transformation and it works well. I can also crop the eye region from a face. Now I want to detect the iris in the cropped image by applying the Hough transform (cvHoughCircles). However, when I try this, the system is not able to find any circles in the image.
Maybe the reason is noise in the image, but I don't think that's it.
So, how can I detect the iris? I have the code for binary thresholding; maybe I can use it, but I don't know how.
If anyone can help I'd really appreciate it. thx :)
You say that with binary thresholding you get an iris that is pure white: that is not what you want. Use something like cvCanny in order to get only the edge of the iris.
Are you detecting the edges correctly?
Can you display the binary image and see the iris clearly?
Circular Hough transforms normally have a radius window (otherwise you are searching a 3D solution space); are you setting the window to a reasonable value?
void houghcircle()
{
    //cvSmooth( graybin, graybin, CV_GAUSSIAN, 5, 5 );
    CvMemStorage* storage = cvCreateMemStorage(0);
    // smooth it, otherwise a lot of false circles may be detected
    CvSeq* circles = cvHoughCircles( edge, storage, CV_HOUGH_GRADIENT, 5, edge->height/4, 1, 1, 2, 50, 70 );
    int i;
    for( i = 0; i < circles->total; i++ )
    {
        float* p = (float*)cvGetSeqElem( circles, i );
        cvCircle( img, cvPoint(cvRound(p[0]), cvRound(p[1])), 2, CV_RGB(0,255,0), -1, 2, 0 );
        cvCircle( img, cvPoint(cvRound(p[0]), cvRound(p[1])), cvRound(p[2]), CV_RGB(255,0,0), 1, 2, 0 );
        cvNamedWindow( "circles", 1 );
        cvShowImage( "circles", img );
        cvWaitKey();
    }
}
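To make the earlier advice concrete, here is a rough sketch in the Python API (not from the answers above): give cv2.HoughCircles an explicit radius window so it only searches for iris-sized circles (the HOUGH_GRADIENT method runs Canny internally, with param1 as the upper Canny threshold). eye_crop.png and every parameter value are assumptions to be tuned:
import cv2
import numpy as np

eye = cv2.imread("eye_crop.png", cv2.IMREAD_GRAYSCALE)
eye = cv2.GaussianBlur(eye, (5, 5), 0)  # smoothing suppresses false circles

# minRadius/maxRadius restrict the search to a plausible iris size.
circles = cv2.HoughCircles(eye, cv2.HOUGH_GRADIENT, dp=2, minDist=eye.shape[0] // 4,
                           param1=100, param2=30, minRadius=20, maxRadius=60)

out = cv2.cvtColor(eye, cv2.COLOR_GRAY2BGR)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(out, (int(x), int(y)), int(r), (0, 0, 255), 2)  # iris candidate in red

cv2.imwrite("iris_candidates.png", out)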
