How can I segment if the characters are connected? I just tried using watershed with distance transform (http://opencv-code.com/tutorials/count-and-segment-overlapping-objects-with-watershed-and-distance-transform/) to find the number of components but it seems that it does not perform well.
It requires the objects to already be separable after thresholding in order to perform well.
That said, how can I segment the characters effectively? I need help/ideas.
Attached is an example of the binary image.
An example of heavily connected characters.
Answer:
I believe there are two approaches here: 1) redo the binarization step that led to these images you have right now; 2) consider different possibilities based on image size. Let us focus on the second approach given the question.
In your smallest image, only two digits are connected, and that happens only when considering 8-connectivity. If you handle your image as 4-connected, then there is nothing to do because there are no two components connected that should be separated. This is shown below. The right image can be obtained simply by finding the points that are connected to another one only when considering 8-connectivity. In this case, there are only two such points, and by removing them we disconnect the two digits '1'.
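A minimal sketch of that removal step, assuming a white-on-black binary image (the filename and the 127 threshold are placeholders): label the image with 4-connectivity and delete foreground pixels whose diagonal neighbours fall in a different 4-connected component.

    import cv2
    import numpy as np

    img = cv2.imread("digits.png", cv2.IMREAD_GRAYSCALE)
    binary = (img > 127).astype(np.uint8)

    # Label with 4-connectivity: purely diagonal links do not join components.
    _, lab4 = cv2.connectedComponents(binary, connectivity=4)

    # A pixel bridges two digits if one of its diagonal neighbours belongs to
    # a different 4-connected component; removing such pixels separates the
    # digits (border pixels are skipped for brevity).
    out = binary.copy()
    h, w = binary.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if binary[y, x] == 0:
                continue
            for dy, dx in ((-1, -1), (-1, 1), (1, -1), (1, 1)):
                q = lab4[y + dy, x + dx]
                if q != 0 and q != lab4[y, x]:
                    out[y, x] = 0
                    break

    cv2.imwrite("separated.png", out * 255)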
In your other image this is no longer the case, and I don't have a simple method for it that could also be applied to the smaller image without making things worse. But we could upscale both images to a common size, using nearest-neighbor interpolation so we stay in a binary representation. After resizing both of your images to a width of 200, keeping the aspect ratio, we can apply the following morphological method to both of them. First do a thinning:
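As a sketch of the resizing and thinning, using cv2.ximgproc.thinning from the opencv-contrib package for the thinning step (any skeletonization would do; the filename is a placeholder):

    import cv2

    img = cv2.imread("digits.png", cv2.IMREAD_GRAYSCALE)
    _, img = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

    # Upscale to a common width of 200 with nearest-neighbour interpolation,
    # which keeps the image strictly binary.
    scale = 200.0 / img.shape[1]
    resized = cv2.resize(img, None, fx=scale, fy=scale,
                         interpolation=cv2.INTER_NEAREST)

    # Thinning/skeletonization (requires the opencv-contrib package).
    skeleton = cv2.ximgproc.thinning(resized)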
Now, as can be seen, the morphological branch points are the ones connecting your digits (there is another one at the left-most digit 'six' too, which will be handled). We can extract these branch points and apply a morphological closing with a vertical line of height 2*height+1 (height being your image's height), so no matter where the point sits, its closing will produce a full vertical line. Since your image is no longer so small, this line doesn't need to be one point wide; in fact I considered a line that is 6 points wide. Since some of the branch points are horizontally close, this closing operation joins them into the same vertical line, while performing an erosion removes the vertical line of any branch point that is not close to another. By doing this, we eliminate the branch point belonging to the digit 'six' at left. After applying these steps, we obtain the following image at left. Subtracting the original image from it, we get the image at right.
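A sketch of those steps, continuing from the previous snippet ('resized' and 'skeleton'); the 6-point line width comes from the text above, while the width-3 erosion element is my guess for "removes an isolated line":

    import cv2
    import numpy as np

    sk = (skeleton > 0).astype(np.uint8)

    # Branch points: skeleton pixels with three or more skeleton neighbours.
    counts = cv2.filter2D(sk, -1, np.ones((3, 3), np.float32)) - sk
    branches = np.where((sk == 1) & (counts >= 3), 255, 0).astype(np.uint8)

    # Closing with a 6-wide vertical line of height 2*h+1 turns every branch
    # point into a full-height vertical line, no matter where the point sits;
    # horizontally close branch points merge into one wider line.
    h = sk.shape[0]
    bar = cv2.getStructuringElement(cv2.MORPH_RECT, (6, 2 * h + 1))
    lines = cv2.morphologyEx(branches, cv2.MORPH_CLOSE, bar)

    # A small horizontal erosion (width 3 is my guess) removes the thin line
    # left by an isolated branch point, such as the one on the left-most 'six'.
    lines = cv2.erode(lines, cv2.getStructuringElement(cv2.MORPH_RECT, (3, 1)))

    # Removing the surviving lines from the resized image cuts the digits apart.
    separated = cv2.bitwise_and(resized, cv2.bitwise_not(lines))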
If we apply these same steps to the '8011' image, we end up with exactly the same image we started with. But that is still fine, because applying the simple method that removes points connected only under 8-connectivity yields the separated components as before.
It is common to use "smearing algorithms" for this, also known as the Run Length Smoothing Algorithm (RLSA). It is a method that segments black-and-white images into blocks. You can find some information here, or look around on the internet for an implementation of the algorithm.
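For illustration, a minimal sketch of the horizontal RLSA pass (the vertical pass is the same idea applied to columns; the threshold is a placeholder to tune):

    import numpy as np

    def rlsa_horizontal(binary, threshold):
        # binary: 2-D array, foreground = 1, background = 0.
        out = binary.copy()
        for row in out:
            last_fg = -1
            for x, v in enumerate(row):
                if v:
                    # Fill a background run if it is short and bounded by
                    # foreground pixels on both sides.
                    if last_fg >= 0 and x - last_fg - 1 <= threshold:
                        row[last_fg + 1:x] = 1
                    last_fg = x
        return out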
Not sure if I want to help you solve captchas, but one idea would be to use erosion. Depending on how many pixels you have to work with, it might sufficiently separate the characters without destroying them. This would likely be best used as a pre-processing step for some other segmentation algorithm.
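A minimal sketch, assuming a binary image with white characters (the filename and kernel size are placeholders to tune against your stroke width):

    import cv2
    import numpy as np

    img = cv2.imread("captcha.png", cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

    # A small erosion thins strokes and can break the thin bridges between
    # touching characters.
    kernel = np.ones((3, 3), np.uint8)
    eroded = cv2.erode(binary, kernel, iterations=1)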
I am working on image registration between LWIR & RGB images. I am able to extract the edges from both images.
RGB_Edges, LWIR_Edges
Now, I want to match the edges of these images to calculate homography.
I tried to match each edge of the RGB image with the LWIR image separately using template matching (OpenCV), but it didn't work.
Therefore, can anyone please suggest some methods to match the edges/structures of both images that could help compute the homography?
I will really appreciate any suggestion/help.
Thanks.
These two images are already fairly well aligned.
Due to the large thickness and irregularity of the edges, I doubt you can do much better.
If you have the option of operator supervision, point at corresponding points in the two images (four pairs are enough for a homography).
For an automated approach, you can try thinning the strokes and then finding (approximate) line segments in both images. For a number of segments in one image, find the segment in the other image that is (approximately) parallel, close, and facing it with significant overlap; you can expect these segments to be in correspondence.
Next, you can obtain corresponding points by forming the intersections between some of the segments in each image (take segments that are close but as perpendicular as possible).
As this procedure will suffer from outliers, model fitting by RANSAC is probably a good option.
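For instance, once you have candidate point pairs, OpenCV's findHomography can do the RANSAC fitting; the points below are made-up placeholders:

    import cv2
    import numpy as np

    # Hypothetical corresponding segment intersections, float32 arrays of
    # shape (N, 1, 2) with N >= 4.
    src_pts = np.float32([[10, 10], [200, 15], [190, 180], [8, 170]]).reshape(-1, 1, 2)
    dst_pts = np.float32([[12, 14], [205, 18], [188, 185], [10, 175]]).reshape(-1, 1, 2)

    # RANSAC rejects outlier pairs while fitting the homography; the 3.0 px
    # reprojection threshold is an assumption to tune.
    H, inlier_mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 3.0)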
I'm going to build a specialized OCR system that recognizes the dotted numbers above. (The sample picture may not contain all special cases; see below.) We decided to separate the number string and recognize each digit before putting them all together to form the final result.
The question is:
How to clearly separate all digits with OpenCV or other image algorithms?
Our difficulty lies in:
1. The image I uploaded is a synthesized image, produced from handpicked digits with slight morphing in order to simulate anomalies in actual use, e.g. some dots are linked into a whole, some dots are eroded, and some dots are offset. We failed to determine their contours using morphology.
2. Sometimes a digit may be skewed too much, like italics with kerning, making a "clean and complete" bounding box impossible.
Some of the ideas we thought of are:
1. Find a way to draw slanted lines to separate the digits instead of traditional vertical lines. We assume these dotted numbers were originally upright monospace characters, so only shear will occur, not rotation.
2. Any method better than simple morphology that could link the dots of each digit together while keeping the dots of separate digits apart would also be useful.
EDIT: Please don't comment below the original question. Just submit your answer. I appreciate any help, no matter how simple your answer may seem.
EDIT: Since the image I provided is somewhat idealized compared to the real situation, a simple morphological operation won't solve the problem. Also, I'm looking for a solution that separates the characters; linking the dots together is not the only option.
I am a relative newcomer to image processing and this is the problem I'm facing - Say I have the image of an application form, like this:
Now I would like to detect all the locations where data is to be entered. In this case, those are the rectangles divided into a number of boxes, like so (not all fields marked):
I can live with the photograph box also being detected. I've tried running the squares.cpp sample in the OpenCV sources, which does not quite get me what I want. I also tried the modified version here; the results were worse (my use case is definitely very different from the OP's in that question).
Also, Hough transforming to get the lines is not really working, with or without blur/threshold, as the noise in the scanned image contributes extraneous lines; thresholding also takes away parts of the combs (the small squares), so the line detection is not up to the mark.
Note that this form is not a scanned copy of a printed form, but the real input might very well be a noisy, scanned image of a printed form.
While I'm definitely sure that this is possible (at least with some tolerance allowed) and I'm trying to get at the solution, it would be really helpful to get insights and ideas from other people who might have tried something like this or who enjoy hacking on CV problems. Also, it would be really nice if the answers explained why a particular operation was done (e.g., dilation to try and fill up any holes left by thresholding, etc.).
Are the forms consistent in any way? Are the boxes the same size on all forms? If you can rely on a consistent size, like the character boxes in the form above, you could use template matching.
Otherwise, the problem seems to be: find any/all rectangles on the image (with a post processing step to filter out any that have a significant amount of markings within, or to merge neighboring rectangles).
The more you can take advantage of the consistencies between the forms, the easier the problem will be. Use any context you can get.
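A minimal template-matching sketch, assuming you crop one character box as the template (the filenames and the 0.8 similarity threshold are placeholders):

    import cv2
    import numpy as np

    form = cv2.imread("form.png", cv2.IMREAD_GRAYSCALE)
    box = cv2.imread("box_template.png", cv2.IMREAD_GRAYSCALE)  # cropped box

    # Normalized cross-correlation; peaks mark likely box locations.
    res = cv2.matchTemplate(form, box, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(res > 0.8)

    # Draw the hits; in a real pipeline overlapping hits should be merged,
    # e.g. by non-maximum suppression.
    for x, y in zip(xs, ys):
        cv2.rectangle(form, (int(x), int(y)),
                      (int(x) + box.shape[1], int(y) + box.shape[0]), 0, 1)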
EDIT
Using the gradients (computed by using a Sobel kernel in both the x and the y direction) you can weed out a lot of the noise.
Using both you can find the direction of the gradients (the equation can be found here: en.wikipedia.org/wiki/Sobel_operator). Let's say we define a discriminating feature of a box to be a vertical or horizontal gradient. If a pixel's gradient orientation is either straight horizontal or straight vertical, keep it; set all else to white.
To make this more robust to noise, you can use a sliding window (3x3) in which you compute the median orientation. If the median (or mean) orientation of the window is vertical or horizontal, keep the current (middle of the window) pixel, otherwise set it to white.
You can use OpenCV for the gradient computation, and possibly the orientation/phase calculation, but you'll probably need to write the actual sliding-window code yourself; I'm not intimately familiar with OpenCV.
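A sketch of the gradient-orientation filtering described above (the magnitude threshold and angular tolerance are assumptions to tune); the 3x3 median-orientation window would then be run over the orientation image:

    import cv2
    import numpy as np

    img = cv2.imread("form.png", cv2.IMREAD_GRAYSCALE)

    # Gradients from a Sobel kernel in the x and y directions.
    gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)

    magnitude = np.hypot(gx, gy)
    orientation = np.degrees(np.arctan2(gy, gx)) % 180.0

    # Keep pixels whose gradient is strong and close to horizontal or
    # vertical; set everything else to white.
    strong = magnitude > 50
    tol = 10.0
    horizontal = (orientation < tol) | (orientation > 180.0 - tol)
    vertical = np.abs(orientation - 90.0) < tol

    out = np.full_like(img, 255)
    keep = strong & (horizontal | vertical)
    out[keep] = img[keep]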
So, my problem is that I have to find common points between two images of a microchip. Here's an example of two images:
Between these two images, we can clearly see some common patterns, like the wires on the bottom right of the first image that can be found in relatively the same place in the second image. Also, the sort of white Z shape in the first image can be seen in the second image; a bit harder to spot, but it's there.
I tried to match them with SURF (OpenCV) and found no common points at all. I tried applying some filters on both images, like edge detection, thresholding, and other filters I could find in GIMP, but whatever I tried, no common points were ever found.
I'd like to know if you have any ideas for solving this problem. My fallback right now would be to manually match key features in both images with line segments, but preferably it should be automated.
A solution that uses OpenCV would be preferable, but I'm open to any suggestion. All the pattern matching situations I have seen in OpenCV were problems way more obvious than this one, with no differences in color and so on.
Unless realtime is required, try a simple approach to test whether rotation can be automated:
Circuit boards like the ones in the images are often based on perpendicular straight line segments. Hence you can "despeckle" and remove stuff like coffee stains by finding line segments.
Think about creating a kernel that has dark pixels on one side of a line and bright pixels on the other. Convolve it with the image (or cross-correlate it) to identify all pixels that sit on a nearly vertical or horizontal bright/dark transition.
You may interlace (process every other row/column) to speed things up.
Edges of stains and speckles may survive this if you also keep angles close to 45°.
The resulting image can be interpreted as a sparse point cloud.
You can now use RANSAC or similar approaches to describe many of the remaining correlations as line segments.
* Use a two-point line segment as the input model for RANSAC; discard candidates that are too small.
* Determine the infinite lines that have many inliers.
* Use growth or binning approaches to segment those lines into line segments.
Benefits:
* High likelihood that the line segments found are actually present as circuitry in the image; with a two-point description of the segments, possible transforms are easy to compute.
* Easy interpretation of the data, as it can be overlaid on the image in OpenCV.
Rotation should be easily found as the rotation that matches most found lines to the horizontal and/or vertical axes.
Apply the rotation.
Repeat for both images.
Now you can determine the best translation between the images by a simple x,y cross-correlation.
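As a sketch of that last step, OpenCV's phaseCorrelate computes the translation directly, assuming both de-rotated images have equal size (the filenames are placeholders):

    import cv2
    import numpy as np

    a = cv2.imread("top.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
    b = cv2.imread("bottom.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

    # Phase correlation needs equal-sized float images; it returns the
    # (x, y) shift that best aligns the two images.
    (shift_x, shift_y), response = cv2.phaseCorrelate(a, b)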
If the top image is always of that quality (quasi-bilevel patterns, easy edge detection), I would try a good geometric matching algorithm (such as Cognex's or Halcon's), training on the top image and searching the bottom one.
Maybe it is worth compensating for rotation first (I hope there is no scaling). You could do that by determining the dominant edge direction, possibly using a Hough transform. Or, much better, by careful mechanical alignment of the sensors.
Anyway, chances of success are low, this is a difficult problem.
I have 55 000 image files (in both JPG and TIFF format) which are pictures from a book.
The structure of each page is this:
some text
--- (horizontal line) ---
a number
some text
--- (horizontal line) ---
another number
some text
There can be from zero to 4 horizontal lines on any given page.
I need to find what the number is, just below the horizontal line.
BUT, the numbers strictly follow each other, starting at one on page one, so in order to find the number I don't need to read it: I can just detect the presence of the horizontal lines, which should be both easier and safer than trying to OCR the page to read the numbers.
The algorithm would be, basically:
for each image
count horizontal lines
print image name, number of horizontal lines
next image
The question is: what would be the best image library/language to do the "count horizontal lines" part?
Probably the easiest way to detect your lines is using the Hough transform in OpenCV (which has wrappers for many languages).
The OpenCV Hough transform will detect all lines in the image and return their angles and start/stop coordinates. You should keep only the ones whose angles are close to horizontal and which are of adequate length.
O'Reilly's Learning OpenCV explains in detail the function's input and output (p.156).
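A minimal sketch with the probabilistic variant, cv2.HoughLinesP (the Canny and Hough parameters are assumptions to tune):

    import cv2
    import numpy as np

    img = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, 50, 150)

    # Probabilistic Hough transform: returns the endpoints of each segment.
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=100,
                            minLineLength=img.shape[1] // 2, maxLineGap=5)

    count = 0
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
            if angle < 2 or angle > 178:  # nearly horizontal
                count += 1
    print("horizontal segments:", count)

Note that fragments of the same rule may be reported more than once, so nearby segments should be merged (e.g. by grouping on their y coordinate) before counting.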
If you have good contrast, try running connected components and analyzing the result. It can be an alternative to finding lines through Hough, and it covers the case where your structural elements are a bit curved or where a line algorithm picks up lines you don't want.
Connected components is a super fast, two-raster-scan algorithm that gives you a mask with all your connected elements marked with different labels and accounted for. You can discard anything short (in terms of aspect ratio). Overall, this can be more general and faster, but probably a bit more involved than running a Hough transform. The Hough transform, on the other hand, is more tolerant of contrast artifacts and even accidental gaps in the lines.
OpenCV has the function findContours() that finds the components for you.
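A sketch of that route with OpenCV's connectedComponentsWithStats (the size and aspect-ratio thresholds are assumptions to tune):

    import cv2

    img = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(img, 0, 255,
                              cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)

    # Two-pass labelling; stats holds each component's bounding box and area.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary,
                                                           connectivity=8)

    count = 0
    for i in range(1, n):  # label 0 is the background
        w = stats[i, cv2.CC_STAT_WIDTH]
        h = stats[i, cv2.CC_STAT_HEIGHT]
        # Keep long, flat components: wide relative to the page and to their
        # own height.
        if w > img.shape[1] // 2 and w >= 20 * max(h, 1):
            count += 1
    print("horizontal lines:", count)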
You might want to try John Resig's "OCR and Neural Nets in JavaScript".