Multilingual Unicode rendering in OpenGL - Delphi

I have to extend an OpenGL rendering system to support international characters (especially Hebrew, Arabic and Cyrillic).
The development platform is Windows (XP/Vista/7), alas using Embarcadero Delphi 2010.
I currently use wglOutLineFont(...) to build my font's display lists and glCallLists(Length(m_Text), GL_UNSIGNED_SHORT, PWideChar(m_Text)) to render my strings.
While this is feasible for Latin-1 characters, building the full Unicode character set in advance is pretty time-consuming (about 8.5 minutes on my machine), so I am looking for a more efficient solution. I thought about limiting the range to U+0020 - U+077F (Latin, Greek, Cyrillic, Arabic and Hebrew) to include just the glyphs I need, but that would only cover my current needs and would become insufficient once other scripts are required.
On the upside, I do not have to worry about left-to-right or right-to-left direction, as our application can handle this already.
I would expect this to be a well-known problem, so I would like to ask whether there is any reference material on it on the web, or whether you could share some insight?
Edit
A clarification: I use polygonal font representations. Each font is constructed at unit size (1.0) in advance and scaled appropriately using glScalef(...) before rendering. I decided against pre-rasterizing since users might zoom in quite closely (the application is used for CAD), so rasterization artifacts would become visible.
Additionally, since a scene seldom exceeds a few hundred characters (mainly labels and measurements), the speed gain from pre-rasterization is negligible.

Don't pre-build the display lists: create an intermediate sprite that builds the lists on demand and caches them. Trying to pre-compute the lists - or pre-generate rasterized textures at every font size and font face, and for all characters - is impractical, especially when you scale to Far Eastern character sets.
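A minimal sketch of such an on-demand cache, using wglUseFontOutlines (the API behind the wglOutLineFont call above) one glyph at a time. It assumes a GDI device context that already has the desired font selected and a current GL context; the class and method names are illustrative, not from the original post:

    uses
      Windows, OpenGL, Generics.Collections;

    type
      TGlyphCache = class
      private
        FDC: HDC;                              // DC with the target font selected
        FLists: TDictionary<WideChar, GLuint>; // character -> display list
      public
        constructor Create(ADC: HDC);
        destructor Destroy; override;
        function ListFor(Ch: WideChar): GLuint;
        procedure RenderText(const AText: string);
      end;

    constructor TGlyphCache.Create(ADC: HDC);
    begin
      inherited Create;
      FDC := ADC;
      FLists := TDictionary<WideChar, GLuint>.Create;
    end;

    destructor TGlyphCache.Destroy;
    var
      List: GLuint;
    begin
      for List in FLists.Values do
        glDeleteLists(List, 1);
      FLists.Free;
      inherited;
    end;

    function TGlyphCache.ListFor(Ch: WideChar): GLuint;
    var
      Gmf: TGlyphMetricsFloat;
    begin
      if not FLists.TryGetValue(Ch, Result) then
      begin
        // Build the outline for this single glyph on first use only:
        // 0.1 chord deviation, no extrusion, polygonal representation.
        Result := glGenLists(1);
        wglUseFontOutlinesW(FDC, Ord(Ch), 1, Result, 0.1, 0.0,
          WGL_FONT_POLYGONS, @Gmf);
        FLists.Add(Ch, Result);
      end;
    end;

    procedure TGlyphCache.RenderText(const AText: string);
    var
      Ch: Char;
    begin
      // Each outline list ends with a glTranslatef for the advance width,
      // so calling the lists in sequence lays out the string.
      for Ch in AText do
        glCallList(ListFor(Ch));
    end;

This way only the glyphs actually used in a session are ever built, and the first-use cost is spread over the application's lifetime instead of paid up front.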

You need to replace the wglOutLineFont.
To do that, generate/render the required glyphs to a texture using wglOutLineFont, then save that texture into a raster image file. When the application loads, it needs to load the texture image and the glyph texture coordinates (four coordinates per glyph), and generate the display lists (one list per glyph; each display list draws a single glyph as a textured quad).
Each short representing a glyph shall have a corresponding display list (their values must match, and glListBase can aid in this).
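For illustration, a sketch of that quad/display-list setup. The TGlyphUV record and GlyphUV array are hypothetical stand-ins for the metrics saved alongside the atlas texture, and ListBase is assumed to have been reserved with glGenLists:

    type
      TGlyphUV = record
        U0, V0, U1, V1: Single; // atlas texture coordinates
        W, H: Single;           // quad size in font units
        Advance: Single;        // pen advance to the next glyph
      end;

    var
      GlyphUV: array[Word] of TGlyphUV; // filled from the saved metrics file

    procedure BuildGlyphQuadLists(ListBase: GLuint; FirstChar, LastChar: Word);
    var
      C: Word;
    begin
      for C := FirstChar to LastChar do
      begin
        // List index = ListBase + character code, so glCallLists can
        // consume the UTF-16 string directly.
        glNewList(ListBase + C, GL_COMPILE);
        with GlyphUV[C] do
        begin
          glBegin(GL_QUADS);
            glTexCoord2f(U0, V0); glVertex2f(0, 0);
            glTexCoord2f(U1, V0); glVertex2f(W, 0);
            glTexCoord2f(U1, V1); glVertex2f(W, H);
            glTexCoord2f(U0, V1); glVertex2f(0, H);
          glEnd;
          glTranslatef(Advance, 0, 0); // advance the pen, as wglUseFontOutlines does
        end;
        glEndList;
      end;
    end;

    // Rendering then mirrors the existing wglOutLineFont path:
    //   glListBase(ListBase);
    //   glCallLists(Length(m_Text), GL_UNSIGNED_SHORT, PWideChar(m_Text));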
I suppose loading a texture is faster than generating font display lists at runtime; practically, you move the glyph rasterization offline. But the display-list generation can still be heavy (many glyphs). You could run the display-list generation in a separate thread, or generate only the display lists you actually need.

I've had good luck transliterating this tutorial into C++, though I'm not sure how well it will transfer to Delphi.

Related

OpenGL ES iOS DrawText on screen drops FPS dramatically

Our team has developed an OpenGL application which draws different polygons on the screen. Additionally, we want to create about 1000 different strings to print on the screen. If we do this with the Texture2D class, the FPS drops below 3.
I've already tested bitmap fonts, which didn't improve the performance.
What is the best way in OpenGL on iOS to draw a lot of text without losing performance or quality (the text should be scalable)?
Allocating 1000 textures takes up a huge amount of memory and will slow down your app, especially if they are at a high enough resolution for readable text. You should generate these textures as they are needed and free them once they are no longer being displayed. Make sure that you aren't generating and freeing textures each frame, but only as needed.
If you are drawing all 1000 strings in the same scene, you should combine as many as you can into shared textures. This will allow you to leverage Cocos2D's TrueType rendering system to keep text high-quality. On the other hand, if this is not an option and all 1000 strings need to be distinct from each other, consider building a font rendering system that renders each character as a glyph image. This will reduce the number of textures used from 1000 to about 100 to represent all standard English characters and punctuation. I had to do something similar for a video game with lots of dynamic text in an OpenGL environment, and got good performance out of it. However, I do not recommend it unless it's absolutely necessary, since it limits your text to only the glyphs you define and you have to program the formatting yourself.

Reducing Flash file size in KB?

I have a Flash file that I need to reduce the size of.
The reason I need to reduce its size is that I will need to convert it into an iPhone app.
Currently it only has 2 buttons and 2 TLF text fields on scene one, layer one, and the file size is 355 KB.
I have also placed the code on layer 2.
Is there any way to reduce its size so I won't have problems when publishing and submitting to the App Store?
Thanks
The biggest portion of that file size will be related to TLF. TLF (Text Layout Framework) is huge and is generally not recommended on mobile (as it has pretty high CPU usage).
If you're not using any TLF-specific features, it would be wise to change your text fields to use classic text instead (DF3).
Beyond TLF, make sure you're using vector objects instead of bitmaps wherever you can, as that will drastically reduce file size. If you are using bitmaps, you can play around with the compression settings to optimize file size further. You can do this globally in the Publish Settings (JPEG Quality) or individually in a graphic's properties menu.
One note on vector graphics and mobile: simple vectors will run OK, but complex vectors will run terribly. Make sure to set cacheAsBitmap = true; on any complex (or even all) vectors to improve performance. Or, in Flash Pro, click on a movie clip and, in the Properties panel, go to the "Display" twirl-down and set Cache as Bitmap in the Render setting.

Making a "piece of paper with text on it" in OpenGL (Specifically on iOS 5)

I've never done OpenGL, but I'm looking for some pointers on this particular question for an AR app I'm practicing with.
I'd like to make an app with a "flat rectangle" along with text written on the surface of the rectangle. Visually, I'm imagining something along the lines of a piece of paper with text written on it. Each time the app starts, the text would be something different (the text is pulled from a plist file).
The user would be able to view the paper from all sides, much as if there was a piece of paper hanging in front of him.
Is this trivial to do in OpenGL? How could I get started?
Sorry for the really open-ended question, but I wanted to get a feel for how this kind of thing is done.
Looking at the OpenGL template source code in the Xcode sample projects, I see that there is a big array of vertices. I presume that to create a "flat" rectangle, I'd essentially just have to set the z coordinates to zero. And then the dynamic text that will attach to the surface of the flat rectangle... I don't have any idea how to do that.
This question is hard to answer unambiguously. In general, this is trivial, but then again it is not.
Drawing a "flat rectangle with something on it" is a couple of API calls, as simple as it can get. Drawing text in OpenGL in an efficient way, and high quality, and without big preprocessing is an entirely different story.
What I would do is render text using whatever the "normal, system-supported" way is under iOS (just like you would draw in any window; I don't know this specific detail), but draw into a bitmap rather than onto the screen. This should be supported; pretty much every OS has supported it for at least 10-15 years. Then turn this bitmap into a texture, bind it, and draw your trivial flat quad with OpenGL (set up a vertex buffer with 4 vertices, each with a texture coordinate, and draw two triangles - as easy as it gets).
The huge advantage of that is that you get to use the installed system fonts (or any fonts available), you don't need to generate a bitmap font, you don't need to think about really ugly things such as hinting and proper spacing, and it's much easier to mix different text styles, etc. There are ways to draw text purely in OpenGL too, of course, but none of them is terribly efficient or nice either. If the text does not change every millisecond, it's really best to render it using the standard renderer the operating system provides (yes, that probably won't be hardware accelerated, but so what... since the user must read the text, it likely won't change every millisecond).
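For what it's worth, here is a minimal Windows/Delphi analogue of that approach (the iOS calls differ, but the idea is identical): let the platform's text renderer draw into an offscreen bitmap, then upload that as a texture. GL_BGRA_EXT is declared locally in case the stock OpenGL unit doesn't export it; on pre-2.0 OpenGL you would also round the bitmap up to power-of-two dimensions first.

    uses
      Windows, Graphics, OpenGL;

    const
      GL_BGRA_EXT = $80E1; // may be missing from the stock OpenGL unit

    function TextToTexture(const AText: string; AFont: TFont): GLuint;
    var
      Bmp: TBitmap;
    begin
      Bmp := TBitmap.Create;
      try
        Bmp.PixelFormat := pf32bit;
        Bmp.Canvas.Font := AFont;
        Bmp.SetSize(Bmp.Canvas.TextWidth(AText), Bmp.Canvas.TextHeight(AText));
        Bmp.Canvas.Brush.Color := clBlack;
        Bmp.Canvas.Font.Color := clWhite;
        Bmp.Canvas.TextOut(0, 0, AText);

        glGenTextures(1, @Result);
        glBindTexture(GL_TEXTURE_2D, Result);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        // DIB scanlines are stored bottom-up, matching OpenGL's row order,
        // so the last scanline is the start of the pixel buffer.
        // Note: GDI does not write the alpha channel, so treat this as an
        // opaque texture (or fill the alpha yourself before uploading).
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, Bmp.Width, Bmp.Height, 0,
          GL_BGRA_EXT, GL_UNSIGNED_BYTE, Bmp.ScanLine[Bmp.Height - 1]);
      finally
        Bmp.Free;
      end;
    end;

The returned texture is then mapped onto the two-triangle quad described above.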
Now it gets more complicated if your "piece of paper" should bend and twist too, or do a page-peel effect rather than being just a flat rectangle. In that case you need to tessellate it, which can be harder than it sounds. Not all tessellations look optimal for all bends/twists, or they do but don't have the optimal (read: minimum) number of vertices.
There is an article on "page peel" and such tessellation in one of the GPU Gems or GPU Pro books, let me search...
There: Andreas Bizzotto: "A Shader-Based eBook Reader - Page Peeling Effect", GPU Pro 2, pp. 278-299.
Maybe you can get hold of a copy or are lucky enough to find it on Google Books or something.

How do I identify letters in an image? (before OCRing)

All I can find on the web is about OCR, but I'm not there yet; I still have to recognize where the letters are in the image.
Any help will be appreciated.
The interesting thing is that the answer is not as simple as it may seem. Some may think that locating the characters in the picture is the first step of OCR, but that is not the case. Actually, you won't be sure where each character is located until you have actually finished recognizing.
The way it works depends entirely on the type of image you are going to recognize. First you should segment your image into text areas (blocks) and everything else.
Just a few examples:
If you are recognizing a license plate in a car picture, you should first locate the license plate, and only then split it into separate characters.
If you are recognizing an application form, you can locate the areas where text is just by knowing its layout.
If you are recognizing a scan of a book page, you have to distinguish pictures from text areas and then work only on the text.
From this moment on you don't need the original image any more; all you need is a binarized image of the text block. All OCR algorithms work on binary images. You may also need other kinds of image transformations, like line straightening, perspective correction, skew correction and so on - all of which again depends on the type of images you are recognizing.
Once a text block is found and normalized, you should go further and find the lines of text within it. In the trivial case of horizontal lines of text this is quite simple: build a pixel histogram over the horizontal rows.
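To make the histogram idea concrete, a small sketch; the TBinaryImage type is just an assumed input format (True = ink pixel):

    type
      TBinaryImage = array of array of Boolean; // True = ink pixel
      TIntArray = array of Integer;

    // Count the ink pixels in every row of a binarized text block.
    function RowHistogram(const Img: TBinaryImage): TIntArray;
    var
      Y, X: Integer;
    begin
      SetLength(Result, Length(Img));
      for Y := 0 to High(Img) do
      begin
        Result[Y] := 0;
        for X := 0 to High(Img[Y]) do
          if Img[Y][X] then
            Inc(Result[Y]);
      end;
    end;

    // Text lines are the maximal runs of consecutive rows whose count stays
    // above a small noise threshold; the valleys between those runs are the
    // gaps between lines.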
Now that you have the lines, you may think it gets simple: just split them into characters, hooray! Again, this is wrong. There are such phenomena as connected characters, broken characters and even ligatures (two letters forming one single shape), or letters whose parts extend to the right above or below the next character. What you should do is create several hypotheses for splitting the line into words and individual characters, then try to OCR every single variant, weighing every hypothesis with a confidence level. The last step is to check the different paths through this graph using a dictionary and select the best one.
And only now, when you have actually recognized everything, can you say where the individual characters are located.
So the simple answer is: recognize your image with an OCR program, and get the coordinates of the characters from its output.
Generally speaking, you'll be looking for small contiguous areas of nearly solid color. I would suggest sampling each pixel and building an array of nearby pixels that also fall within a threshold of the original pixel's color (repeat for the neighbours of each matching pixel). Put the entire array aside as a potential character (or check it now) and move on (potentially ignoring previously collected pixels for a speed-up).
Optimisations are possible if you know the font size, quality and/or color of the text in advance. If not, you'll want to be fairly generous with your thresholds of what constitutes a "contiguous area". A sketch of this region-growing pass follows.
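Here is one way that pass could look, using a grayscale image and an explicit stack instead of recursion; the types and the tolerance test are illustrative assumptions:

    uses
      Types, Generics.Collections;

    type
      TGrayImage = array of array of Byte;
      TVisited = array of array of Boolean;

    // Collect the contiguous region around (StartX, StartY) whose pixels stay
    // within Tolerance of the seed pixel's value. Visited marks pixels already
    // claimed by earlier regions (the speed-up mentioned above). The caller
    // owns and frees the returned list; each list is a potential character.
    function GrowRegion(const Img: TGrayImage; var Visited: TVisited;
      StartX, StartY, Tolerance: Integer): TList<TPoint>;
    var
      Stack: TStack<TPoint>;
      P: TPoint;
      Seed: Byte;

      procedure Push(X, Y: Integer);
      begin
        if (Y >= 0) and (Y <= High(Img)) and (X >= 0) and (X <= High(Img[Y])) and
           not Visited[Y][X] and (Abs(Img[Y][X] - Seed) <= Tolerance) then
        begin
          Visited[Y][X] := True;
          Stack.Push(Point(X, Y));
        end;
      end;

    begin
      Result := TList<TPoint>.Create;
      Stack := TStack<TPoint>.Create;
      try
        Seed := Img[StartY][StartX];
        Push(StartX, StartY);
        while Stack.Count > 0 do
        begin
          P := Stack.Pop;
          Result.Add(P);
          // Grow into the four direct neighbours of the matched pixel.
          Push(P.X + 1, P.Y); Push(P.X - 1, P.Y);
          Push(P.X, P.Y + 1); Push(P.X, P.Y - 1);
        end;
      finally
        Stack.Free;
      end;
    end;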

Generate font from an image of text

Is it possible to generate a specific set of fonts from the image of text given below?
My idea is to generate a specific font for the given image of text by manually selecting portions of the image and mapping them to a set of letters, generating the font from this, and then using that font to make the text readable for an OCR. Is generating a font possible using any open-source implementation? Also, please suggest any good OCRs.
Abbyy FineReader 10 gets better than expected results but predictably gets confused when the characters touch.
Your problem is that the line spacing is too small. The descenders of each line overlap the character bounding boxes of the characters in the line directly below. This makes character segmentation almost impossible because the characters are touching and overlapping. The number of combinations of overlapping characters is virtually impossible to train for. The 'g' and 'y' characters are the worst offenders.
A double line spaced version of this would probably OCR reasonably well.
A custom solution that segmented and separated each line, along with a good dictionary, would definitely improve the results. There would still be some errors to correct manually, though. The custom routine would have to deal with the ascenders and descenders and try to segment the image into lines, which can then be fed to a decent OCR engine. One way would be to analyse every character blob on the page and allocate it to a line. Leptonica (www.leptonica.com - C imaging library) would probably make this job a little easier.
I would not try this without increasing the resolution to 200 or 300 dpi first.
With this custom solution, training a font becomes an option if the OCR engine does a poor job initially.
Abbyy (www.abbyy.com) or Google Tesseract OCR 3.00 would be a good place to start.
No guarantees as to whether all of this will work, though. This is quite a difficult page to OCR, and you need to work out whether it is better to have it typed up manually overseas. It depends on the number of pages you need to process.
