Connected character segmentation in OpenCV

What is a good method to segment characters that are united as in the following figure, knowing that:
characters have this font, but the font size varies based on the image size
only isolated groups of characters from the image are connected
Also, how can I detect whether a given bounding box contains two or more connected letters?
I tried checking for width > height to detect connected characters, but it doesn't work for the blue groups in the image.
I also tried a segmentation method for separating characters based on Article section 3.4, but got poor results.

IDEA: if you already have a good OCR, you can try applying OCR to each of these connected components (or contours). If the OCR cannot detect a letter, then it is not one letter; there are two or more.
IDEA: check the convexity defects of these connected components; the closest defect points are where the bridges are.
IDEA: use a kernel with a small width and a large height for erosion followed by dilation (morphological opening); see the sketch below.
IDEA: take the y-derivative of the image. The smallest contours (or lines) left will be your bridges. Mark them and erase those pixels from the original image.
IDEA: a search-problem approach: take two letters from the alphabet (in this font), connect them horizontally with some tool, and use OpenCV's matchShapes method (moment matching) to check whether that shape matches your connected component. Or try to implement autocorrelation.
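A minimal sketch of the morphological-opening idea, assuming Python/OpenCV; the file name and the kernel height are placeholders you would tune to your font size:

    import cv2

    # Binarized input assumed: white characters on a black background.
    img = cv2.imread("connected_chars.png", cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

    # A kernel much taller than it is wide: the erosion removes thin horizontal
    # bridges, and the following dilation restores the surviving vertical strokes.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 7))  # (width, height)
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    cv2.imwrite("separated.png", opened)

Note that thin horizontal parts of the letters themselves (e.g. the top of a 'T') can also vanish, so the kernel height must be tuned against the stroke widths of the font.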
good luck.

Related

How to get the position (x,y) and number of particular objects or shapes in a hand-drawn image?

First, I've been learning image processing and neural networks (etc.) on my own for just a couple of weeks, so I'm really new and far from a pro. And sorry for my bad English.
There's an image/photo of my drawing; I want to get the coordinates of the objects/shapes (the black dots) and the number next to each one, the number indicating the dot's sequence number.
How do I get this? How do I detect the dots? Shape recognition for the dots? Handwritten-number recognition for the numbers? Then segmentation to get the positions? Or should I use template matching? But every dot has a slightly different shape because of the hand drawing. Should I use a neural network? In a NN, each input neuron usually holds one pixel to recognize a character, right? Can I use a picture of a character or a drawn dot per neuron to recognize my whole picture?
I'm very new, so I really need your advice; correct me if I'm wrong! Please tell me what I must learn, what I must do, and what I must use.
Thank you very much. :'D
This is a difficult problem which can't be solved by a quick solution.
Here is how I would approach it:
Get a better picture. Your image is very noisy and is taken in low light with high ISO. Use a better camera and better lighting conditions so you can get the background to be as white as possible and the dots as black as possible. Try to maximize the contrast.
Threshold the image so that all the background is white and the dots and numbers are black. Maybe you could apply some erosion and/or dilation to help connect the dark edges together.
Detect the rectangle somehow and set your work area to be inside it (crop the rest of the image so that you are left with the area inside the rectangle). You could do this by detecting the contours in the image; the contour with the largest area is the rectangle (because it's the largest object in the image). Of course, this is not the only way. See this: OpenCV find contours. (A sketch of the thresholding and cropping steps follows below.)
Once you are left with only the dots, circles and numbers, you need to find a way to detect them and discriminate between them. You could again find all contours (or maybe you've found them all in the previous step). You need to figure out a way to tell whether a certain contour is a circle, a filled circle (dot) or a number. This is a problem in its own right. Maybe you could count the white/black pixels in the contour's bounding box: dots have more black pixels than circles and numbers. You also need to do something about numbers that touch dots (like the number 5 in your image).
Once you know what is a dot, circle or number you could use an OCR library (Tesseract or any other OCR lib) to try and recognize the numbers. You could also use a neural network library (maybe trained with the MNIST dataset) to recognize the digits. A good one would be a convolutional neural network similar to LeNet-5.
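A rough sketch of the thresholding and cropping steps above, assuming Python with OpenCV 4; the file name and the use of Otsu's threshold are placeholders:

    import cv2

    img = cv2.imread("drawing.jpg", cv2.IMREAD_GRAYSCALE)

    # Threshold so the ink becomes white foreground; Otsu picks the level.
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # The contour with the largest area should be the rectangle; crop to it.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)
    work_area = binary[y:y + h, x:x + w]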
As you can see, this is a problem that requires many different steps to solve, and many different components are involved. The steps I suggested might not be the best, but with some work I think it can be solved.

Segmentation for connected characters

How can I segment the characters if they are connected? I just tried using watershed with the distance transform (http://opencv-code.com/tutorials/count-and-segment-overlapping-objects-with-watershed-and-distance-transform/) to find the number of components, but it seems that it does not perform well.
It requires the objects to already be separated after thresholding in order to perform well.
That said, how can I segment the characters effectively? I need help/ideas.
Attached is an example of the binary image.
And an example that is heavily connected.
I believe there are two approaches here: 1) redo the binarization step that led to these images you have right now; 2) consider different possibilities based on image size. Let us focus on the second approach given the question.
In your smallest image, only two digits are connected, and that happens only when considering 8-connectivity. If you handle your image as 4-connected, then there is nothing to do because there are no two components connected that should be separated. This is shown below. The right image can be obtained simply by finding the points that are connected to another one only when considering 8-connectivity. In this case, there are only two such points, and by removing them we disconnect the two digits '1'.
In your other image this is no longer the case, and I don't have a simple method for it that can also be applied to the smaller image without making things worse. But, actually, we could upscale both images to some common size, using nearest-neighbor interpolation so we stay in a binary representation. By resizing both of your images to a width of 200, keeping the aspect ratio, we can apply the following morphological method to both of them. First, do a thinning:
Now, as can be seen, the morphological branch points are the ones connecting your digits (there is another one at the left-most digit 'six' too, which will be handled). We can extract these branch points and apply a morphological closing with a vertical line of height 2*height+1 (height being that of your image), so no matter where the point is, its closing will produce a full vertical line. Since your image is no longer so small, this line doesn't need to be 1 point wide; in fact I used a line 6 points wide. Since some of the branch points are horizontally close, this closing operation will join them in the same vertical line. If a branch point is not close to another, an erosion will remove its vertical line; by doing this, we eliminate the branch point related to the digit six at the left. After applying these steps, we obtain the following image at left. Subtracting the original image from it, we get the image at right.
If we apply these same steps to the '8011' image, we end up with exactly the same image we started with. But this is still fine, because applying the simple method that removes the points connected only in 8-connectivity, we obtain the separated components as before.
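A sketch of the "remove the points connected only in 8-connectivity" step, assuming Python with NumPy/OpenCV; on thick binary strokes the condition below fires almost exclusively at the diagonal bridge pixels, but this reading of the step is an interpretation:

    import cv2
    import numpy as np

    img = cv2.imread("digits.png", cv2.IMREAD_GRAYSCALE)
    fg = (img > 0).astype(np.uint8)
    h, w = fg.shape

    # A pixel touches a neighbor "only in 8-connectivity" when that neighbor
    # is diagonal and both 4-neighbors they share are background.
    to_remove = np.zeros_like(fg)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if not fg[y, x]:
                continue
            for dy, dx in ((-1, -1), (-1, 1), (1, -1), (1, 1)):
                if fg[y + dy, x + dx] and not fg[y + dy, x] and not fg[y, x + dx]:
                    to_remove[y, x] = 1
    fg[to_remove == 1] = 0  # deleting these pixels splits the diagonal-only joins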
It is common to use "smearing algorithms" for this, also known as the Run Length Smoothing Algorithm (RLSA). It is a method that segments black-and-white images into blocks. You can find some information here, or look around on the internet for an implementation of the algorithm.
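A minimal horizontal RLSA sketch in Python/NumPy (a vertical pass is the same idea on the transposed image); the smearing threshold is a guess you would tune to the text size:

    import cv2

    def rlsa_horizontal(binary, gap):
        # Fill horizontal background runs shorter than `gap` that lie
        # between two foreground pixels (classic RLSA smearing).
        out = binary.copy()
        for row in out:
            start = None  # start of the current background run
            for x, v in enumerate(row):
                if v:
                    if start is not None and x - start <= gap:
                        row[start:x] = 255
                    start = x + 1
        return out

    img = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY_INV)
    blocks = rlsa_horizontal(binary, gap=30)  # 30 px is a placeholder value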
Not sure if I want to help you solve captchas, but one idea would be to use erosion. Depending on how many pixels you have to work with, it might separate the characters sufficiently without destroying them. This would likely work best as a pre-processing step for some other segmentation algorithm.

OpenCV: comparing simple images with small difference

I have a bunch of "simple" images and I want to check whether they are similar. I compare them to each other using template matching (cv::matchTemplate) and the results are quite good.
Now I want to fine-tune my program and I face a problem. For example, I have two images which look very much alike. The only differences are that one has a thicker line and that the digit in front of the item is different. When both images are small, a one-pixel difference in line thickness makes a big difference in the template-matching result. When the line thicknesses are the same and the only difference is the front digit, I get a matching result of about 0.98 with CV_TM_CCORR_NORMED on a successful match. When the line thickness differs, the result is about 0.95.
I cannot decrease my threshold below 0.98 because some other similar images have the same line thickness.
Here are example images:
So what options do I have?
I have tried:
dilating the original and the template
eroding both
applying morphologyEx to both
calculating keypoints and comparing them
finding corners
But no big success yet. Are these images so simple that detecting "good features" is hard?
Any help is very welcome.
Thank you!
EDIT:
Here are some other example images. What my program consider as similar are put in same zip-folder.
ZIP
A possible approach might be to thin the two images so that every line is one pixel wide, since the differing thickness is causing your main problem with similarity.
The procedure would be to first binarize/threshold the images, then apply a thinning operation to both, so that both now have the same line thickness of 1 px. Then use the usual template matching that gave you good results before (a sketch follows the links below).
In case you'd like more details on the thinning/skeletonization of binary images here are a few OpenCV implementations posted on various discussion forums and OpenCV groups:
OpenCV code for thinning (Guo and Hall algo, works with CvMat inputs)
The JR Parker implementation using OpenCV
Possibly more efficient code here (uses OpenCV optimized access methods a lot, however most of the page is in Japanese!)
And lastly a brief overview of thinning in case you're interested.
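A minimal sketch of this procedure, assuming Python and the opencv-contrib package (which provides cv2.ximgproc.thinning); file names are placeholders:

    import cv2

    img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)
    tmpl = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

    _, img_bin = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
    _, tmpl_bin = cv2.threshold(tmpl, 127, 255, cv2.THRESH_BINARY)

    # Reduce both shapes to 1 px skeletons so line thickness stops
    # influencing the matching score.
    img_thin = cv2.ximgproc.thinning(img_bin)
    tmpl_thin = cv2.ximgproc.thinning(tmpl_bin)

    result = cv2.matchTemplate(img_thin, tmpl_thin, cv2.TM_CCORR_NORMED)
    _, max_val, _, _ = cv2.minMaxLoc(result)
    print(max_val)

A slight dilation of both skeletons before matching can make the score less brittle, since 1 px lines have to overlap exactly to correlate.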
You need something more elementary here, there isn't much reason to go for fancy methods. Your figures are already binary ones, and their shapes are very similar overall.
One initial idea: consider the upper points and bottom points in a given image and form an upper hull and a bottom hull (simply a hull, not a convex hull or anything else). A point is said to be an upper point (resp. bottom point) if, given a column i, it is the first point starting at the top (bottom) of the image that is not a background point in i. Also, your image is mostly one single connected component (in some cases there are separated vertical bars, but that is fine), so you can discard small components easily. This step is important for your situation because some of the figures contain noise that is irrelevant to the rest of the image. Considering a connected component with fewer than 100 points as small, these are the hulls you get for the respective images included in the question:
The blue line indicates the upper hull, the green line the bottom hull. If it is not apparent, when we consider the regional maxima and regional minima of these hulls we obtain the same number in both of them. Furthermore, they are all very close except for some displacement along the y axis. If we consider the mean x position of the extrema and plot the lines of both images together, we get the following figure. In this case, the lines in blue and green are for the second image, and the lines in red and cyan for the first. Red dots are at the mean x coordinate of the regional minima, and blue dots the same but for the regional maxima (these are our points of interest). (The following image has been resized for better visualization.)
As you can see, you get many nearly overlapping points without doing anything. If we do even less, i.e. do not even care about this overlapping, we can classify your images in the trivial way: if an image a and another image b have the same number of regional maxima in the upper hull, the same number of regional minima in the upper hull, the same number of regional maxima in the bottom hull, and the same number of regional minima in the bottom hull, then a and b belong to the same class. Doing this for all your images, they are all correctly grouped except for the following situation:
In this case we have only 3 maxima and 3 minima for the upper hull in the first image, while there are 4 maxima and 4 minima for the second. Below you see the plots for the hulls and points of interest obtained:
As you can see, in the second upper hull there are two extrema very close together. Smoothing the curve eliminates both of them, making the images match under the trivial method. Also, note that if you draw a rectangle around your images, this method will say they are all equal. In that case you will want to compare multiple hulls, discarding the points in the current hull and constructing new ones. Nevertheless, this method is able to group all your images correctly given that they are all very simple and mostly noise-free.
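A sketch of this hull-signature idea in Python/NumPy: the profiles are the first foreground row per column seen from the top and from the bottom, and the signature counts their regional extrema. The threshold value is a placeholder, and the small-component filtering step is omitted for brevity:

    import cv2
    import numpy as np

    def hull_profiles(binary):
        # Row of the first foreground pixel per column, from the top
        # (upper hull) and from the bottom (bottom hull).
        fg = binary > 0
        h = fg.shape[0]
        has_ink = fg.any(axis=0)
        top = np.argmax(fg, axis=0).astype(float)
        bottom = (h - 1 - np.argmax(fg[::-1, :], axis=0)).astype(float)
        top[~has_ink] = np.nan
        bottom[~has_ink] = np.nan
        return top, bottom

    def extrema_counts(profile):
        # (#regional maxima, #regional minima) of the 1-D profile values.
        p = profile[~np.isnan(profile)]
        d = np.sign(np.diff(p))
        d = d[d != 0]                 # collapse plateaus
        turns = np.diff(d)
        return int(np.sum(turns == -2)), int(np.sum(turns == 2))

    def signature(path):
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY_INV)
        top, bottom = hull_profiles(binary)
        return extrema_counts(top) + extrema_counts(bottom)

    # Two images fall in the same class when their signatures match.
    print(signature("a.png") == signature("b.png"))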
From what I can tell, the difficulty is when the shape is the same and only the size differs. A simple hack approach could be:
- subtract the images, then erode. If the shapes were the same but one slightly bigger, subtracting will leave only the edges, which are thin and will vanish with erosion, like noise.
Somewhat more formal would be to take the contours, then the approximate polygons, and do an invariant comparison (Hu moments, etc.); a sketch follows.
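A sketch of the invariant comparison, assuming Python with OpenCV 4 (cv2.matchShapes compares Hu-moment invariants internally); file names are placeholders:

    import cv2

    def main_contour(path):
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return max(contours, key=cv2.contourArea)

    # 0.0 means identical shapes; the score is scale- and rotation-invariant,
    # so mere size differences do not penalize the match.
    score = cv2.matchShapes(main_contour("a.png"), main_contour("b.png"),
                            cv2.CONTOURS_MATCH_I1, 0.0)
    print(score)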

Localization of numbers within a complex scene image

First of all, I very much appreciate the help provided by the experts here at SO. The questions posed by many and answered by the experts have been of immense benefit to me. They helped me with a very crucial problem a few months back when I was a student doing my thesis.
Right now I am working on a problem to detect (and then recognize) numbers in a complex scene image. You can check out these images here: http://imageshack.us/g/823/dsc1757w.jpg/. These are pictures of marathon runners with their numbers on the front of their shirts. I have to detect all the numbers that appear in the image and then recognize them. The recognition won't be difficult, as these appear to be OCR-friendly characters. The crucial thing is how to detect these numbers.
I had an idea to first color-filter the image for black. But when I tried this in Matlab, the results were not encouraging: many of the regions in the image meet this criterion (the clothes, some shadows behind the runners, the shadows in the foliage, etc.). Either I need a way to distinguish the characters from these other regions, or I need some other, better technique.
There are papers available and I have gone through some of them, like the SWT, DWT, etc., but I have a feeling they won't be of much help. I was thinking some kind of training algorithm might be useful. There is another reason for this: in the future there might be other photos, possibly with different fonts, so I think a dedicated algorithmic approach might fail. Can anyone point me in the right direction?
I am not a novice in image processing, but not an expert either. So, any and all help/suggestion in this regard will be greatly appreciated :) .
Thanks,
MD
You know that your problem is not a simple one, but it seems very interesting!
Although I don't have any solutions for you, I will just share my thoughts in hope that you can make something out of it.
Let's take 2 of your photos as examples:
Photo-A: http://imageshack.us/photo/my-images/59/dsc0275a.jpg/
It shows a single person with a relatively "big" green label with numbers on his shirt.
Photo-B: http://imageshack.us/photo/my-images/546/dsc0243u.jpg/
It shows a lot of people with smaller red labels on their shirts.
(The labels' height in pixels is about 1/5 of the label in Photo-A)
Considering the above photos, I will try to write some random thoughts which may help...
(a) Define your scale: There is no point applying a search algorithm to find labels from 2x2 pixels up to the full image resolution. You must define minimum/maximum limits for the width and height of a label. Those limits may depend on many different factors:
(1) One factor is the real size of labels (defined by the distance of people from camera) which can be defined as a percentage of the image width & height.
(2) Another factor is the actual reading accuracy of the OCR you are going to use. If the numbers' image height is smaller than Y1 pixels or bigger than Y2 pixels, the OCR will not be able to read them (it sounds strange but it's true: big images may seem very clear to the human eye, but an OCR may have problems reading them).
(b) Find the area(s) of interest: In your case, this is equivalent to "Find the approximate position of labels". We can define an athlete label roughly as "An (almost) rectangular area, which may be a bit inclined relative to photo borders, and contains: A central area of black + color C1 [e.g. red or green] + a white (=neutral) area on top and/or bottom of it".
A possible algorithm to find the approximate position of a label is:
(1) Traverse all image left-to-right, top-to-bottom and examine a square area of MinHeight/2 x MinHeight/2
(2) Create the histogram of the square area (or posterize it e.g. to 8 levels) and try to find if there is only Black + Another color C1 in a percentage of e.g. Black: 40% +/- 10, Color: 60% +/- 10%
(3) If (2) is true try to expand the area to Right and Bottom while the percentages are kept in the specified limits
(4) If the square is fully expanded, check if the expanded area size is inside the min/max limits of width/height you specified in (a). If not, go to step 1
(5) Process the expanded area to read the numbers - see (c) below
(6) Go to step (1)
(c) Process the area(s) of interest: Try the following steps (a rough sketch follows the list):
(1) Convert each image-area to grayscale by applying a color filter that burns color C1 to white.
(2) Equalize the grayscale to make the black letters stand out.
(3) If an inclination has been detected, perform a reverse rotation on the image-area to make the letters as horizontal as possible.
(4) Feed the area to an OCR trained only for numbers
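A rough sketch of steps (c1)-(c2), assuming Python/OpenCV, a red label color C1, and HSV ranges that are pure guesses to calibrate:

    import cv2

    area = cv2.imread("label_area.png")  # candidate area found in step (b)

    # (c1) Burn color C1 (assumed red here) to white; red wraps around 0 in HSV.
    hsv = cv2.cvtColor(area, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 80, 80), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 80, 80), (180, 255, 255))
    gray = cv2.cvtColor(area, cv2.COLOR_BGR2GRAY)
    gray[mask > 0] = 255

    # (c2) Equalize so the black digits stand out against the burned background.
    gray = cv2.equalizeHist(gray)

    # (c4) `gray` can now be fed to an OCR restricted to digits.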
Good luck with your project!
You could try to contact the author of this software:
Yaroslav is an active member of StackOverflow.

Image processing / super light OCR

I have 55 000 image files (in both JPG and TIFF format) which are pictures from a book.
The structure of each page is this:
some text
--- (horizontal line) ---
a number
some text
--- (horizontal line) ---
another number
some text
There can be from zero to 4 horizontal lines on any given page.
I need to find what the number is, just below the horizontal line.
BUT, the numbers strictly follow each other, starting at one on page one. So in order to find each number, I don't need to read it: I can just detect the presence of the horizontal lines, which should be both easier and safer than OCRing the page to read the numbers.
The algorithm would be, basically:
    for each image
        count horizontal lines
        print image name, number of horizontal lines
    next image
The question is: what would be the best image library/language to do the "count horizontal lines" part?
Probably the easiest way to detect your lines is using the Hough transform in OpenCV (which has wrappers for many languages).
The OpenCV Hough transform will detect all lines in the image and return their angles and start/stop coordinates. You should keep only the ones whose angles are close to horizontal and whose length is adequate.
O'Reilly's Learning OpenCV explains the function's input and output in detail (p. 156).
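A minimal counting sketch with OpenCV's probabilistic Hough transform in Python; every threshold below is a guess to tune, and segments of the same physical line may still need merging by their y coordinate:

    import cv2
    import numpy as np

    img = cv2.imread("page.jpg", cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, 50, 150)

    # HoughLinesP returns segments as (x1, y1, x2, y2).
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=100,
                            minLineLength=img.shape[1] // 3, maxLineGap=10)

    count = 0
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
            if angle < 2 or angle > 178:  # keep the nearly horizontal ones
                count += 1
    print("page.jpg", count)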
If you have good contrast, try running connected components and analyzing the result. It can be an alternative to finding lines with Hough, and it covers the case where your structural elements are a bit curved, or where a line algorithm picks up lines you don't want it to pick up.
Connected components is a super fast, two-raster-scan algorithm that will give you a mask with all your connected elements marked with different labels and accounted for. You can discard anything short (in terms of aspect ratio). Overall, this can be more general and faster, but probably a bit more involved than running the Hough transform. The Hough transform, on the other hand, is more tolerant of contrast artifacts and even of accidental gaps in lines.
OpenCV has the function findContours() that finds the components for you.
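A sketch of the connected-components alternative using cv2.connectedComponentsWithStats (a more direct fit for the labeling described above than findContours); the aspect-ratio and length cutoffs are assumptions:

    import cv2

    img = cv2.imread("page.jpg", cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Two raster scans; stats holds a bounding box and area per label.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)

    count = 0
    for i in range(1, n):  # label 0 is the background
        w = stats[i, cv2.CC_STAT_WIDTH]
        h = stats[i, cv2.CC_STAT_HEIGHT]
        if w > 10 * h and w > img.shape[1] // 3:  # long and flat: a horizontal rule
            count += 1
    print(count)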
You might want to try John Resig's OCR and Neural Nets in JavaScript.
