Is it possible to generate a specific font from the image of text below?
My idea is to generate a custom font for the image of text below by manually selecting portions of the image and mapping them to a set of letters. I would then generate the font and use it to make the text readable for an OCR engine. Is font generation like this possible with any open-source implementation? Also, please suggest any good OCR engines.
Abbyy FineReader 10 gets better than expected results but predictably gets confused when the characters touch.
Your problem is that the line spacing is too small. The descenders of each line overlap the character bounding boxes of the characters in the line directly below. This makes character segmentation almost impossible because the characters are touching and overlapping. The number of combinations of overlapping characters is virtually impossible to train for. The 'g' and 'y' characters are the worst offenders.
A double line spaced version of this would probably OCR reasonably well.
A custom solution that segmented and separated each line, along with a good dictionary, would definitely improve the results. There would still be some errors to correct manually though. The custom routine would have to deal with the ascenders and descenders and try to segment the image into lines which can then be fed to a decent OCR engine. One way would be to analyse every character blob on the page and allocate it to a line. Leptonica (www.leptonica.com - C imaging library) would probably make this job a little easier.
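As a rough illustration of that blob-to-line idea, here is a minimal sketch using OpenCV rather than Leptonica; the Otsu binarization and the line tolerance are assumptions you would have to tune, not a tested pipeline.

    // Sketch: assign each character blob to a text line by its vertical centre.
    // Uses OpenCV instead of Leptonica; thresholds are illustrative only.
    #include <opencv2/opencv.hpp>
    #include <map>
    #include <vector>

    int main() {
        cv::Mat gray = cv::imread("page.png", cv::IMREAD_GRAYSCALE);
        cv::Mat bin;
        cv::threshold(gray, bin, 0, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);

        // Connected components = candidate character blobs.
        cv::Mat labels, stats, centroids;
        int n = cv::connectedComponentsWithStats(bin, labels, stats, centroids);

        // Group blobs whose vertical centres fall in the same band into "lines".
        const double lineTolerance = 10.0;              // pixels, depends on resolution
        std::map<int, std::vector<int>> lines;          // line band -> blob indices
        for (int i = 1; i < n; ++i) {                   // label 0 is the background
            double cy = centroids.at<double>(i, 1);
            int band = static_cast<int>(cy / lineTolerance);
            lines[band].push_back(i);
        }
        // Each group of blobs can now be cropped out and fed to the OCR engine
        // as a single, well-separated line.
        return 0;
    }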
I would not try this without increasing the resolution to 200 or 300 dpi first.
With this custom solution, training a font becomes an option if the OCR engine does a poor job initially.
Abbyy (www.abbyy.com) or Google Tesseract OCR 3.00 would be a good place to start.
No guarantees as to whether all of this will work though. This is quite a difficult page to OCR and you need to work out whether it is better to have it typed up manually overseas. It depends on the number of pages you need to process.
Related
I have a collection of type-written image captions which look like this:
I know that the typewriter is consistent and monospace, with characters measuring 14x22px (as measured from the top of a capital letter to the bottom of a descender).
Tesseract is producing output like this:
The results are mostly good when Tesseract has detected the correct bounding boxes for the letters. But there are many strings of letters which are clumped together (e.g. "Ea", "tree", "fr" and "om" on the first line). These are always transcribed incorrectly and account for the majority of errors.
This is frustrating because I know a priori that all the characters are of a particular size. Is it possible to pass this knowledge on to the tesseract command line tool?
My command to generate the box file is:
tesseract foo.jpg foo batch.nochop makebox
If possible, I'd prefer to avoid training Tesseract on the font—I don't have any manually transcribed samples, so building a corpus of training data would require some effort.
I'm not sure that Tesseract throws connected characters completely off as Noremac said.
Actually, I think it includes chopping of joined characters whenever the result of word detection is unsatisfactory, as explained in section 4.1 of An Overview of the Tesseract OCR Engine.
And I also think that once it detects fixed-pitch text, it should automatically chop the text, even if the characters are connected (look at figure 2 of the same paper).
I know that it's a little bit late to add this answer, but maybe it will help some future visitors!
The issue isn't the font size as much as it is the letters connecting. If you zoom in on the above images with a program that shows the actual pixels (rather than blurring them together), you can see that those groupings of two characters are actually connected. Tesseract OCR is based entirely on connected components, so if characters are connected at all it throws the result completely off. I see a couple of options:
If possible, give it a higher resolution image where there is more separation between the characters
Adjust the preprocessing to do a more strict threshold.
I noticed that the pixel connecting the E and the a in the first occurrence is lighter, so tightening the threshold will remove that connection. However, this could affect more than what you want, such as disjointing characters where you don't expect it; a rough thresholding sketch follows the link below.
For updating the thresholding consider this: https://groups.google.com/forum/#!topic/tesseract-ocr/JRwIz3xL45U
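As a hedged starting point, here is a minimal OpenCV sketch of a stricter fixed threshold applied before handing the image to Tesseract; the cutoff value of 100 and the file names are purely illustrative and would need tuning against your scans.

    // Sketch: apply a stricter fixed threshold so faint pixels bridging two
    // characters (e.g. the 'E'/'a' join) drop out before OCR.
    #include <opencv2/opencv.hpp>

    int main() {
        cv::Mat gray = cv::imread("caption.jpg", cv::IMREAD_GRAYSCALE);

        cv::Mat strict;
        // A lower cutoff than Otsu would pick keeps only the darkest ink.
        // 100 is purely illustrative; tune it against your scans.
        cv::threshold(gray, strict, 100, 255, cv::THRESH_BINARY);

        cv::imwrite("caption_binarized.png", strict);
        // Then run: tesseract caption_binarized.png out batch.nochop makebox
        return 0;
    }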
I have a few images. Some of the images contain text and a few others don't contain text at all. I want a robust algorithm which can determine whether an image contains text or not.
Even probabilistic algorithms are fine.
Can anyone suggest such an algorithm?
Thanks
There are some specifics that you'll want to pin down:
Will there be much text in the image? Or just a character or two?
Will the text be oriented properly? Or does rotation also need to be performed?
How big will you expect the text to be?
How similar to text will the background be?
Since images can vary significantly, you want to define the problem and find as many constraints as you can to make it as simple as possible. It's a difficult problem.
For such an algorithm you'll want to focus on what makes text distinct from the background (consistent spacing between characters and lines, consistent height, consistent baseline, etc.). There's an area of research in "text detection" that you'll want to investigate, and you'll find a number of algorithms there. Two surveys of some of these methods can be found here and here.
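As one concrete (and heavily simplified) example of such a heuristic, here is a sketch that looks for dense edge regions grouped into word-like blobs with OpenCV; every constant in it is an assumption to tune on your own images, not something taken from the surveys above.

    // Sketch: crude text/no-text decision based on grouping dense edge regions.
    #include <opencv2/opencv.hpp>
    #include <vector>

    bool probablyContainsText(const cv::Mat& gray) {
        // 1. Edges: text produces many short, high-contrast strokes.
        cv::Mat edges;
        cv::Canny(gray, edges, 50, 150);

        // 2. Smear edges horizontally so characters of a word merge into one blob.
        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(15, 3));
        cv::Mat smeared;
        cv::morphologyEx(edges, smeared, cv::MORPH_CLOSE, kernel);

        // 3. Count blobs shaped like words/lines (wider than tall, not tiny).
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(smeared, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        int wordLike = 0;
        for (const auto& c : contours) {
            cv::Rect r = cv::boundingRect(c);
            if (r.width > 2 * r.height && r.height > 8 && r.height < 100)
                ++wordLike;
        }
        // Requiring 3 word-like blobs is arbitrary; tune on your data.
        return wordLike >= 3;
    }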
All I can find on the web is about OCR, but I'm not there yet; I still have to recognize where the letters are in the image.
Any help will be appreciated.
The interesting thing is that the answer is not as simple as it may seem. Some may think that locating characters in the picture is the first step of OCR, but that is not the case. Actually, you won't be sure where each character is located until you have actually finished recognizing.
The way it works depends entirely on the type of image you are going to recognize. First you should segment your image into text areas (blocks) and everything else.
Just a few examples:
If you are recognizing a license plate in a picture of a car, you should first locate the license plate, and only then split it into separate characters.
If you are recognizing an application form, you can locate the areas containing text just by knowing its layout.
If you are recognizing a scan of a book page, you have to distinguish pictures from text areas and then work only on the text.
From this point on you don't need the original image any more; all you need is a binarized image of the text block. All OCR algorithms work on binary images. You may also need to do other kinds of image transformations like line straightening, perspective correction, skew correction and so on - all of that again depends on the type of images you are recognizing.
Once the text block is found and normalized, you should go further and find the lines of text within the block. In the trivial case of horizontal lines of text this is quite simple: create a pixel histogram over the horizontal rows.
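A minimal sketch of that row-histogram idea, assuming a binary image where text pixels are white (255); the ink threshold is an illustrative guess.

    // Sketch: find text lines by summing ink pixels per image row.
    #include <opencv2/opencv.hpp>
    #include <utility>
    #include <vector>

    // Returns (top, bottom) row pairs for each detected text line.
    std::vector<std::pair<int, int>> findLines(const cv::Mat& binary /* text = 255 */) {
        cv::Mat rowSum;
        cv::reduce(binary / 255, rowSum, 1, cv::REDUCE_SUM, CV_32S);  // one value per row

        std::vector<std::pair<int, int>> lines;
        const int minInk = 3;      // rows with fewer ink pixels count as gaps (assumed)
        int top = -1;
        for (int y = 0; y < rowSum.rows; ++y) {
            bool ink = rowSum.at<int>(y, 0) >= minInk;
            if (ink && top < 0) top = y;                                    // line starts
            if (!ink && top >= 0) { lines.push_back({top, y}); top = -1; }  // line ends
        }
        if (top >= 0) lines.push_back({top, rowSum.rows});
        return lines;
    }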
Now that you have lines, you may think it is simple from here: just split them into characters, hooray! Again, that is wrong. There are phenomena such as connected characters, broken characters and even ligatures (two letters forming one single shape), or letters whose parts extend to the right above or below the next character. What you should do is create several hypotheses for splitting a line into words and individual characters, then try to OCR every single variant and weight every hypothesis with a confidence level. The last step would be checking the different paths in this graph against a dictionary and selecting the best one.
And only now, when you have actually recognized everything, can you say where individual characters are located.
So, the simple answer is: recognize your image with an OCR program, and get the coordinates of the characters from its output.
Generally speaking you'll be looking for small contiguous areas of nearly solid color. I would suggest sampling each pixel and building an array of nearby pixels that also fall within a threshold of the original pixel's color (repeat for the neighbours of each matching pixel). Put the entire array aside as a potential character (or check it now) and move on (potentially ignoring previously collected pixels for a speedup).
Optimisations are possible if you know in advance the font-size, quality and/or color of the text. If not you'll want to be fairly generous with your thresholds of what constitutes a "contiguous area".
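A minimal sketch of that region-growing idea on a plain grayscale buffer; the 4-connectivity and the tolerance parameter are assumptions, and a real implementation would add the speedups mentioned above.

    // Sketch: grow a region of similar-coloured pixels from a seed (4-connected).
    // Works on a plain grayscale buffer; the similarity tolerance is a guess.
    #include <cstdint>
    #include <cstdlib>
    #include <queue>
    #include <utility>
    #include <vector>

    std::vector<std::pair<int, int>> growRegion(const std::vector<uint8_t>& img,
                                                int width, int height,
                                                int seedX, int seedY, int tolerance) {
        std::vector<bool> visited(img.size(), false);
        std::vector<std::pair<int, int>> region;
        std::queue<std::pair<int, int>> frontier;
        uint8_t seedValue = img[seedY * width + seedX];

        frontier.push({seedX, seedY});
        visited[seedY * width + seedX] = true;
        while (!frontier.empty()) {
            auto [x, y] = frontier.front();
            frontier.pop();
            region.push_back({x, y});
            const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
            for (int i = 0; i < 4; ++i) {
                int nx = x + dx[i], ny = y + dy[i];
                if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                int idx = ny * width + nx;
                if (visited[idx]) continue;
                // Keep the neighbour if its value is close to the seed's colour.
                if (std::abs(int(img[idx]) - int(seedValue)) <= tolerance) {
                    visited[idx] = true;
                    frontier.push({nx, ny});
                }
            }
        }
        return region;   // a candidate character blob
    }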
I am searching for a good character extraction method,
or sometimes it is called stroke-model or stroke filter.
I've seen many papers, but they all take a long time to understand and implement,
so I want to ask if someone knows of some good source code or demos.
Also, I would like to get some kind of full overview of the methods available on this theme: character extraction from (grayscale) images.
The main problem is to get regions of the image that include only characters; then some binarization can be done. After that the feature extraction is done (that is where the actual OCR happens).
Maybe GNU Ocrad can be interesting? I haven't looked at the source though.
An area with characters is recognized by a large number of sharp edges. There will be some preferential directions, but this is not as strong as you'd see with box shapes.
You seem to assume that it is possible to get "regions of image that include only characters". This is too optimistic. Just look at this very page. There are symbols mixed in with text. And above this editing box, the first four toolbuttons are B, I, a globe and ". Five, if you count the thin divider bar | after the I
I have to extend an OpenGL rendering system to support international characters (especially Hebrew, Arabic and Cyrillic).
The development platform is Windows (XP|Vista|7), alas using Embarcadero Delphi 2010.
I currently use wglOutLineFont(...) to build my font's display list and glCallLists(length(m_Text), UNSIGNED_SHORT, PWchar(m_Text) ) to render my strings.
While this is feasible for Latin-1 characters, building the full Unicode character set in advance is pretty time-consuming (about 8.5 minutes on my machine), so I am looking for a more efficient solution. I thought about limiting the range from U+0020 to U+077F (Latin, Greek, Cyrillic, Arabic and Hebrew) to include just the glyphs I need, but that would just be a solution for my current needs, and will become insufficient once other encodings are needed.
On the upside, I do not have to worry about left-to-right or right-to-left direction, as our application can handle this already.
I would expect this to be a well-known problem, so I would like to ask if there is any reference material on this on the web, or if you could share some insight on this?
Edit
A clarification: I use a polygonal font representation. Each font is constructed at unit size (1.0) in advance and scaled appropriately using glScalef(...) before rendering. I decided against pre-rasterizing since users might zoom in quite closely (the application is used for CAD), so rasterization artifacts would become visible.
Additionally, since a scene seldom exceeds a few hundred characters (mainly labels and measurements), the speed gain from pre-rasterization is negligible.
Don't pre-build the display lists: create an intermediate sprite that builds the lists on demand and caches them. Trying to pre-compute lists - or pre-generate rasterized textures at every font size and font face, and for all characters - is impractical, especially when you scale to Far Eastern character sets.
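A minimal C-style sketch of that on-demand idea, assuming the font is already selected into the device context and that wglUseFontOutlinesW (the WGL call presumably behind the question's wglOutLineFont wrapper) is used to build one polygonal glyph at a time; the cache layout and parameters are illustrative only.

    // Sketch: build outline display lists lazily, one glyph at a time, and cache them.
    // Assumes a Windows GL context and a font already selected into the HDC.
    #include <windows.h>
    #include <GL/gl.h>
    #include <unordered_map>

    static std::unordered_map<wchar_t, GLuint> g_glyphLists;

    GLuint getGlyphList(HDC hdc, wchar_t ch) {
        auto it = g_glyphLists.find(ch);
        if (it != g_glyphLists.end())
            return it->second;                       // already built, reuse it

        GLuint list = glGenLists(1);
        GLYPHMETRICSFLOAT gmf;
        // Build exactly one polygonal glyph at unit size (deviation/extrusion assumed).
        wglUseFontOutlinesW(hdc, ch, 1, list, 0.0f, 0.0f, WGL_FONT_POLYGONS, &gmf);
        g_glyphLists[ch] = list;
        return list;
    }

    void drawText(HDC hdc, const wchar_t* text) {
        for (const wchar_t* p = text; *p; ++p)
            glCallList(getGlyphList(hdc, *p));       // each list also advances the pen
    }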
You need to replace the wglOutLineFont.
To do that, render the required glyphs to a texture using wglOutLineFont, and then save the texture to a raster image file. When the application loads, it needs to load the texture image and the glyph texture coordinates (4 coordinates for each glyph), and generate the display lists (one list for each glyph; each display list draws a single glyph as a textured quad).
Each short value representing a glyph shall have a corresponding display list (their values must match, and glListBase can aid in this).
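To make the glListBase idea concrete, here is a rough sketch of building one textured-quad list per glyph index; the GlyphUV structure, the atlas coordinates and the unit-sized quad are placeholders for whatever your atlas file actually stores.

    // Sketch: one display list per glyph index, each drawing a textured quad
    // from a pre-rendered glyph atlas. Coordinates and sizes are placeholders.
    #include <windows.h>
    #include <GL/gl.h>

    struct GlyphUV { float u0, v0, u1, v1, advance; };   // loaded from the atlas file

    void buildGlyphLists(GLuint atlasTexture, const GlyphUV* glyphs, int count) {
        GLuint base = glGenLists(count);
        glListBase(base);                         // so glCallLists can index by glyph value
        for (int i = 0; i < count; ++i) {
            const GlyphUV& g = glyphs[i];
            glNewList(base + i, GL_COMPILE);
            glBindTexture(GL_TEXTURE_2D, atlasTexture);
            glBegin(GL_QUADS);
            glTexCoord2f(g.u0, g.v1); glVertex2f(0.0f, 0.0f);
            glTexCoord2f(g.u1, g.v1); glVertex2f(1.0f, 0.0f);
            glTexCoord2f(g.u1, g.v0); glVertex2f(1.0f, 1.0f);
            glTexCoord2f(g.u0, g.v0); glVertex2f(0.0f, 1.0f);
            glEnd();
            glTranslatef(g.advance, 0.0f, 0.0f);  // move the pen for the next glyph
            glEndList();
        }
        // Rendering then stays as in the question:
        //   glCallLists(length, GL_UNSIGNED_SHORT, textPointer);
    }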
I suppose loading a texture is faster than generating font display lists at runtime. Practically, you move the glyph rasterization offline. But the display list generation can still be heavy (many glyphs). You could run the display list generation in a separate thread, or generate only the display lists your text actually requires.
I've had good luck transliterating this tutorial into C++, though I'm not sure how well it will transfer to Delphi.