Use OpenCV to detect text blocks to send to Tesseract on iOS

How can I use OpenCV to detect all the text in an image? I want to be able to detect "blocks" of text individually and then pass the recognized blocks to Tesseract. Here is an example: if I were to scan this, I would want to scan the paragraphs separately, not go from left to right, which is what Tesseract does.
[Image of the example]

This would be my first test:
Threshold the image to get a black-and-white image, with the text in black.
Erode it until each paragraph becomes one big blob. The blob may have lots of holes; that doesn't matter.
Find the contours and their bounding boxes.
If some paragraphs merge, erode less, or dilate a little bit after the erode.

Related

How to improve fax document quality?

I've been using Tesseract to OCR IBAN numbers from fax documents with a resolution of 200x200 or 200x100 dpi. The documents are poor in quality, and I'm using C#.NET. How do I improve the fax document and text quality to improve OCR accuracy?
Musa:
Fax images can get somewhat tricky. Initially, you could try scaling or resizing the off-DPI images so that they correspond to a square resolution (i.e., 200x200).
After this, it's a matter of the content that's on the image (the text characters and their appearance). There are a number of image operations you could perform in an attempt to help make the text objects more suitable for recognition:
Erosion: If the text objects appear very bold in the image, you could erode them to thin them out.
Dilation: The opposite of erosion. Dilation will add pixels to the objects in question. So, if the text is very thin or has small gaps, performing dilation could help.
Handling dot-shading: If the text on the image is actually composed of black & white dots (assuming this is a 1-bit, black and white image), then dilating the image may possibly help with this. Or, converting the image to a higher bit depth, smoothing the pixels with a blur operation, and then thresholding it back down to 1-bit could help to make the text objects solid.
Hope this helps.

Merging with background while thresholding

I am doing a project on a license plate recognition system, but I am having trouble segmenting the license plate characters. I have tried cvAdaptiveThreshold() with different window sizes, as well as Otsu's and Niblack's algorithms, but in most cases the license plate characters merge with the background.
Sample images and outputs are given below. In the first image, all the license plate characters are connected by a white line at the bottom, so the thresholding algorithm couldn't extract the characters. How can I extract characters from such images? In the second image, noise in the background merges with the foreground, which connects all the characters together. How can I segment characters in these types of images? Is there any segmentation algorithm that can segment the characters in the second image?
Preprocessing: find big black areas in your image and mark them as background. You can do this with a threshold, for example; another way would be to use findContours (and contourArea to get the size of each result). This way you know which areas you can colour black after step 1.
Use OTSU (top image, right column, blue title background).
Colour everything you know to be background black.
Use opening/closing or erode/dilate (I'm not sure which will work better) to get rid of small lines and refine your results.
Alternatively, you could run an edge detection and merge all areas that are "close together", like the second 3 in your example. You could check whether areas are close together by measuring the distance between the bounding boxes of your contours.
PS: I don't think you should blur your image, since it seems to be pretty small already.

Sharpening image using OpenCV OCR

I've been trying to build an image processing/OCR script that will let me extract the letters (using Tesseract) from the boxes found in the image below.
After a lot of processing, I was able to get the picture to look like this.
To remove the noise, I inverted the image, followed by flood-filling and Gaussian blurring. This is what I ended up with next.
After running it through some thresholding and erosion to remove the noise (erosion being the step that distorted the text), I was able to get the image to look like this before running it through Tesseract.
This is a pretty good rendering and allows for fairly accurate results from Tesseract, though it sometimes fails because it reads the hash (#) as an H or a W. This leads me to my question!
Is there a way, using opencv, skimage, or PIL (opencv preferably), that I can sharpen this image to increase my chances of Tesseract reading it properly? Or is there a way to get from the third image to the final one WITHOUT using erosion, which ultimately distorted the text?
Any help would be greatly appreciated!
OpenCV does have functions like filter2D that convolve an arbitrary kernel with a given image; in particular, you can use kernels designed for image sharpening. The main question is whether this will improve the results of your OCR library or not. The image is already pretty sharp, and the noise in it is not a result of blur. I have never worked with Tesseract myself, but I am fairly sure it already does all the noise reduction it can, and 'helping' it in this process may actually have the opposite effect. For example, any sharpening process tends to amplify noise (as opposed to noise-reduction processes, which usually blur images). Most computer vision libraries give better results when provided with raw (unprocessed) images.
Edit (after question update):
There are multiple ways to do this. The first one I would test is this: your first binary image is pretty clean and sharp, so instead of using morphological operations that reduce the quality of the letters, switch to filtering contours. Use the findContours function to find all contours in the image and store their hierarchy (i.e., which contour is inside which). Of all the contours found, you actually only need the contours on the first and second levels, i.e., the outer and inner contours of each letter (contours at level zero are the outermost contours). Other contours can be discarded. Among the contours that belong to the first level, you can also discard those whose bounding box is too small to be a real letter. After these two discarding procedures, I would expect most of the remaining contours to be parts of the letters. Draw them on a white image and run OCR. (If you want white letters on a black background, you will need to invert the order of the vertices in the contours.)

Opencv how to ignore small parts on image

I need a little help with OpenCV; I'm a beginner and don't know all the functions yet.
I'm trying to OCR my license plate, which is a Brazilian plate. After some image processing with cvCvtColor, cvCanny, cvFindContours, and cvDrawContours, I get images like this:
It's a fake image; I mounted this image because I don't want to publish my real plate on the web. My real image contains only black and white; I painted some parts in this example because I want to ignore those parts. The red part is a city name, the yellow part is a hyphen separator, and the green part is the hole for fixing the plate to the car. I need to know if there is a way to ignore these small parts and recognize only the big parts, so that after this filter I can do my OCR processing. Any help?
I'm not sure if it helps in other situations, but in this one you can remove small contours using erosion, or simply use contourArea to calculate each contour's area (and remove a contour if its area is too small).

how to use OpenCV to get a binarized image's histogram

For example, I have a binarized image like this:
http://www.iebayer.com/forum/attachments/month_1001/100127142364234cbfc9c9c793.jpg
and I want to get a histogram like this:
http://www.iebayer.com/forum/attachments/month_1001/100127142301dc4f49420b2389.jpg
How do I do this with OpenCV?
While OpenCV does have histogram functions, I am not sure I would bother with them in this case.
It seems that all you need to do is split the image into a number of sections and then calculate the amount of white in each one: some kind of funky spatial histogram. I am pretty sure no OpenCV function exists to do that directly.
So I suggest running through every pixel in each section and counting the white pixels (and, if the sections are of different sizes, counting the black pixels as well). Then it is simply a matter of drawing the result, which can be done easily with the rectangle drawing function. If you read the documentation, you'll find it's really quite informative.
