I'm using Tesseract for optical character recognition on images that contain only a single character each, but I'm having trouble recognizing some characters.
I'm cropping the characters from a larger image and saving them as PBM files with ImageMagick's convert. Here are some sample images:
These are four separate images. The letters 'S', 'P' and 'E' are recognized correctly, but the letter 'T' is not being recognized. There's a problem with a few other characters too. Here's how I'm invoking tesseract:
tesseract character.pbm stdout -l eng -psm 10
So, is there any way I can improve the results? Maybe by modifying the image in some way with convert?
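One thing that often helps Tesseract with tiny single-character crops is to upscale them and add a white margin before OCR; with ImageMagick that would be something like `convert character.pbm -resize 300% -bordercolor white -border 10 character_big.pbm`. The same idea as a minimal NumPy sketch (the sizes and factor here are illustrative guesses, not values from the question):

```python
import numpy as np

def upscale_and_pad(img, factor=3, margin=10):
    """Nearest-neighbour upscale of a grayscale image (0=black, 255=white),
    then surround it with a white margin -- mimicking ImageMagick's
    -resize 300% -bordercolor white -border 10."""
    big = np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)
    return np.pad(big, margin, mode="constant", constant_values=255)

# A tiny 2x2 stand-in for a character bitmap:
char = np.array([[0, 255],
                 [255, 0]], dtype=np.uint8)
out = upscale_and_pad(char, factor=3, margin=10)
print(out.shape)  # (2*3 + 2*10, 2*3 + 2*10) = (26, 26)
```

Feeding the enlarged, padded image to `tesseract ... -psm 10` sometimes rescues characters like 'T' that fail at small sizes.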
Related
I want to preprocess this image on iOS (Apple).
What kinds of filters can we apply to this kind of image?
I want to remove the double-quote characters before and after the numbers, as well as the last character, as I marked in the boxes.
I have tried whitelisting:
tesseract.charWhitelist = "0123456789";
and blacklisting:
tesseract.charBlacklist = "\":";
I have used the GPUImage library for preprocessing.
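For reference, the Tesseract engine's own variable names are `tessedit_char_whitelist` and `tessedit_char_blacklist`; misspelled variable names are silently ignored. With the command-line tool the same restriction can go in a config file (looked up in tessdata/configs, or given as a path), e.g. a file `digits_only` containing:

```
tessedit_char_whitelist 0123456789
tessedit_char_blacklist ":
```

and then `tesseract input.png stdout -l eng digits_only`. The iOS wrapper's property names may differ, so check the wrapper you are using.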
Hello, I'm trying to detect characters in this specific image.
Original Image:
After some preprocessing, I've finally got this image:
Currently I can detect single characters using the SURF feature detection and matching algorithm.
I can also detect single characters with template matching.
But with these algorithms I cannot really classify my matches to determine which character is which.
After some research and testing, I decided to extract HOG descriptors and train an SVM on them for classification.
When I cropped the characters for SURF and template matching, I did it manually in Photoshop, so the crops are not all the same size. But the HOG descriptor uses a fixed width × height window, so every character sample must have the same size.
So my question is: since I am only going to use these methods on the original image, can I manually crop and resize every character for HOG? Or do I have to first detect each character, extract it, and then prepare the extracted characters for HOG? If the latter, what kinds of methods would you suggest?
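Either way, each character crop has to end up at the HOG window size before descriptors are computed. A minimal NumPy sketch of the usual recipe (pad the crop to a square so the aspect ratio survives, then nearest-neighbour resize to a fixed window; the 64×64 window is an arbitrary choice, not something from the question):

```python
import numpy as np

def prepare_for_hog(crop, size=64, background=255):
    """Pad a grayscale character crop to a square, then resize it to
    size x size with nearest-neighbour sampling, so every sample shares
    the fixed window that HOG descriptors require."""
    h, w = crop.shape
    side = max(h, w)
    square = np.full((side, side), background, dtype=crop.dtype)
    y0, x0 = (side - h) // 2, (side - w) // 2
    square[y0:y0 + h, x0:x0 + w] = crop
    idx = (np.arange(size) * side) // size  # nearest-neighbour index map
    return square[np.ix_(idx, idx)]

crop = np.zeros((30, 17), dtype=np.uint8)  # a dummy all-black character crop
sample = prepare_for_hog(crop)
print(sample.shape)  # (64, 64)
```

From there, `cv2.HOGDescriptor` or `skimage.feature.hog` can be run on each fixed-size sample. For applying the trained SVM back to the full image, automatic detection (e.g. contour-based segmentation) generalizes better than manual crops, since test-time characters won't come pre-cropped.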
Some text images are not recognized by Tesseract.
For example, consider the following Rails image, which is not recognized by Tesseract.
The above image, when OCRed, gives no output.
For some other images, the accuracy is not up to the mark.
I am using Ruby on Rails, and to implement Tesseract OCR text recognition I am using the tesseract gem and some code.
What's the problem, and how do I get output with good accuracy?
The problem is that Tesseract is meant for images with only text. Results for images like the one you have posted are not guaranteed.
You will need to do some image processing (crop the image to only the text part), and convert the image to black-text-on-white-background.
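A minimal sketch of the black-text-on-white-background conversion (a global threshold with NumPy; the cutoff of 128 is a guess, and Otsu or adaptive thresholding is usually more robust on photos):

```python
import numpy as np

def to_black_on_white(gray, thresh=128):
    """Binarize a grayscale image: pixels darker than `thresh` become
    pure black (0), everything else pure white (255)."""
    return np.where(gray < thresh, 0, 255).astype(np.uint8)

gray = np.array([[10, 200],
                 [130, 90]], dtype=np.uint8)
print(to_black_on_white(gray))  # [[  0 255]
                                #  [255   0]]
```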
Tesseract works on images that contain only text. But what if an image contains both a picture and text, and we want only the text to be recognized?
I am using Tesseract for OCR recognition of text from images. Tesseract extracts the exact text from images that contain only text. However, when I tested an image containing a car and its number plate, Tesseract produced garbled text for the car number. I applied grayscale conversion, thresholding and other effects to try to get the exact text and increase accuracy, but it still outputs wrong text mixed with strange characters. So I am looking for other ways to extract such text.
Does anyone know how to get text from such images using Tesseract OCR, or an alternative that leaves only the text part in the image, so that Tesseract can output the exact text?
Cropping the image is one way to isolate the text, but how can I do that using ImageMagick or any other tool?
Thanks.
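With ImageMagick the crop itself is `convert input.png -crop WxH+X+Y +repage text_only.png` (a width×height region at offset X,Y). If you already have the image as an array, the same crop is just slicing; a minimal NumPy sketch with made-up coordinates:

```python
import numpy as np

def crop_region(img, x, y, w, h):
    """Cut the w x h region at offset (x, y) out of a grayscale image,
    equivalent to ImageMagick's -crop WxH+X+Y."""
    return img[y:y + h, x:x + w]

img = np.full((100, 200), 255, dtype=np.uint8)  # stand-in for the photo
img[40:60, 120:180] = 0                          # pretend this is the plate text
text = crop_region(img, x=120, y=40, w=60, h=20)
print(text.shape)  # (20, 60)
```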
If you know exactly where on the image the text is, you can send the coordinates of those regions to Tesseract along with the image. Take a look at the Tesseract API methods TesseractRect or SetRectangle.
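A sketch of that with the tesserocr Python wrapper (assuming it is installed; the coordinates below are made up). `SetRectangle` takes left, top, width and height, so a PIL-style box needs converting first:

```python
def box_to_rect(box):
    """Convert a PIL-style (left, top, right, bottom) box into the
    (left, top, width, height) arguments that SetRectangle expects."""
    left, top, right, bottom = box
    return left, top, right - left, bottom - top

# Usage against tesserocr (not executed here; requires tesseract installed):
# from tesserocr import PyTessBaseAPI
# with PyTessBaseAPI() as api:
#     api.SetImageFile('car.png')
#     api.SetRectangle(*box_to_rect((120, 40, 360, 90)))  # hypothetical plate box
#     print(api.GetUTF8Text())

print(box_to_rect((120, 40, 360, 90)))  # (120, 40, 240, 50)
```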
I'm building an iOS application (take a picture and run OCR on it) using Tesseract (an OCR library) and it is working very well with well written numbers and characters (using usual fonts).
The problem I am having is that if I try it on a 7-segment display, it gives very poor results.
So my question is: Does anyone know how I can approach this problem? Is there a way for Tesseract to recognize these characters?
I too had great difficulty in getting tesseract to recognize digits from images of LCD displays.
I had some marginal success by preprocessing the images with ImageMagick to overlay a copy of the image on itself with a slight vertical shift to fill in the gaps between segments:
$ composite -compose Multiply -geometry +0+3 foo.tif foo.tif foo2.png
In the end, though, my saving grace was the "Seven Segment Optical Character Recognition" binary: http://www.unix-ag.uni-kl.de/~auerswal/ssocr/
Many thanks to the author, Erik Auerswald, for this code!
I haven't tried OCRing a 7-segment display, but I suspect the problem is that the characters are not connected components. In my experience, Tesseract does not handle disconnected fonts well.
Simple erosion (as image preprocessing) might help by connecting the segments, but you would have to test it and tune the kernel size to avoid distorting the digits too much.
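As a concrete sketch of that idea: for dark digits on a light background, a grayscale erosion (minimum filter) thickens the dark strokes and can bridge the gaps between segments. A dependency-free NumPy version with a 3×3 kernel (OpenCV's `cv2.erode` does the same thing faster):

```python
import numpy as np

def erode3x3(img, background=255):
    """Grayscale 3x3 erosion (minimum filter): each pixel becomes the
    darkest value in its 3x3 neighbourhood, thickening dark strokes."""
    padded = np.pad(img, 1, mode="constant", constant_values=background)
    shifted = [padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(3) for dx in range(3)]
    return np.minimum.reduce(shifted)

# Two dark segments separated by a one-pixel gap, like broken 7-segment strokes:
img = np.full((5, 5), 255, dtype=np.uint8)
img[1, 1:4] = 0   # upper segment
img[3, 1:4] = 0   # lower segment
out = erode3x3(img)
print(out[2, 2])  # the gap pixel is now dark: 0
```

As the answer above warns, a larger kernel (or repeated erosion) closes bigger gaps but also fattens and eventually merges the digits, so the kernel size needs tuning per image.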