How to draw outline text on OpenGL ES context in iOS? - ios

I'm trying to draw text in an OpenGL ES context on the iOS platform. I have to draw rich text with outline fonts (TTF, OTF or others).
I know of two outline text drawing libraries:
CoreText
FreeType
CoreText is the default text framework in iOS. I know how to use it, but only with a CGContext. What should I do to draw into an OpenGL ES context? Should I draw into a CoreGraphics bitmap and copy it to OpenGL ES?
I have no experience with FreeType. Is it similar to CoreText?
What's the recommended way to draw outline fonts in an OpenGL ES context?
Note: my application is a soft real-time game.

There's some Apple sample code that uses Quartz 2D to render nicely anti-aliased text to a texture. I used it for a debug HUD in a game a couple of years ago. The performance wasn't great, but if you don't have to re-render every frame, that approach should work.
Sorry, I can't recall the exact details of the sample code, but the idea is: create a bitmap context, render into it, then turn it into a texture. (This works with CoreText too.)
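The approach above can be sketched like this. This is not the Apple sample itself, just a minimal outline assuming an active EAGLContext; `attrString`, `width` and `height` are placeholders you would supply.

```objc
// 1. Render an attributed string into a Core Graphics bitmap context
//    (CoreText draws into any CGContext, including a bitmap one).
size_t width = 256, height = 64;              // placeholder texture size
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                         space, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(space);

CTLineRef line = CTLineCreateWithAttributedString(
                     (__bridge CFAttributedStringRef)attrString);
CGContextSetTextPosition(ctx, 10, 10);
CTLineDraw(line, ctx);
CFRelease(line);

// 2. Upload the raw bitmap bytes as an OpenGL ES texture.
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, CGBitmapContextGetData(ctx));
CGContextRelease(ctx);
```

Note that the CG bitmap uses a top-left origin while GL textures use bottom-left, so you will likely want to flip the quad's texture coordinates when drawing.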

Related

GLPaint based OpenGL ES Blending Issue

I'm working on an app based on Apple's GLPaint sample code. I've changed the clear color to transparent black and added an opacity slider; however, when I mix colors together at a low opacity setting, they don't mix the way I expect. They seem to mix the way light mixes, not the way paint mixes. Here is an example of what I mean:
The "Desired Result" was obtained by using glReadPixels to render each color separately and merge it with the previously rendered image (i.e. using Apple's default blending).
However, mixing each frame with the previous one is too time-consuming to do on the fly. How can I get OpenGL to blend the colors properly? I've been researching online for quite a while and have yet to find a solution that works for me. Please let me know if you need any other info to help!
From the looks of it, with your current setup there is no easy solution. For what you are trying to do, you need custom shaders, which is not possible using GLKit alone.
Luckily you can mix GLKit and OpenGL ES.
My recommendation would be to:
Stop using GLKit for everything except setting up your rendering surface with GLKView (which is tedious without GLKit).
Use an OpenGL program with custom shaders to draw to a texture backing an FBO.
Use a second program with custom shaders that does post-processing (after drawing the above texture to a quad, which is then rendered to the screen).
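The render-to-texture step can be sketched as follows. This is a minimal outline, assuming a valid GL ES 2.0 context; the texture size and variable names are placeholders.

```objc
// Create a texture to serve as the FBO's color attachment.
GLuint fbo, colorTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1024, 768, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Attach it to a framebuffer object.
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

// Draw with the custom-shader program here; then rebind the GLKView's own
// framebuffer and run the post-processing pass over colorTex on a quad.
```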
A good starting point would be to load up the OpenGL template that comes with Xcode and start modifying it. Be warned: if you don't understand shaders, the code there will make little sense. It draws two cubes, one using GLKit and one without, using custom shaders.
References to start learning:
Intro to shaders
Rendering to a Texture
Shader Toy - This should help you experiment with your post processing frag shader.
GLEssentials example - This shows how to render to a texture using OpenGL (a bit outdated).
Finally, if you are really serious about using OpenGL ES to its full potential, you should invest the time to read through the OpenGL ES 2.0 Programming Guide. Even though it is six years old, it is still relevant and the only book I've found that explains all the concepts correctly.
Your "Current Result" is additive color, which is how OpenGL is supposed to work. Mixing like paint would be subtractive color. You don't have control over this with OpenGL ES 1.1, but you could write a custom fragment shader for OpenGL ES 2.0 that does subtractive color. If you are blending texture images from iOS, you need to know whether the image data has been premultiplied by alpha in order to blend correctly. OpenGL ES expects the non-premultiplied format.
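A subtractive mix in a GL ES 2.0 fragment shader might look like the sketch below. This is one possible formulation, not a standard one; the uniform names and the exact mixing formula are assumptions you would tune for your brush model.

```glsl
precision mediump float;
varying vec2 v_texCoord;
uniform sampler2D u_backdrop;  // previously painted canvas
uniform sampler2D u_brush;     // current brush stroke

void main() {
    vec4 dst = texture2D(u_backdrop, v_texCoord);
    vec4 src = texture2D(u_brush, v_texCoord);
    // Subtractive mixing: each layer of pigment removes light, so darken
    // the backdrop by the brush color's absorption, scaled by its alpha.
    vec3 mixed = max(dst.rgb - (1.0 - src.rgb) * src.a, 0.0);
    gl_FragColor = vec4(mixed, dst.a);
}
```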
You need to write that code in the function that is called on color change, and set the blend function each time:
CGFloat red, green, blue;
// set red, green, blue to the desired color combination
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glColor4f(red * kBrushOpacity,
          green * kBrushOpacity,
          blue * kBrushOpacity,
          kBrushOpacity);
To do more with blend functions, use this link.
Please specify whether it works; it works for me.

how to draw a string in a GLKView mixed with OpenGL

I want to be able to draw a string over the OpenGL window in my iOS app. What is the easiest way to do this? I know you can define a texture with letters and draw parts of that texture to create text - but this is quite a bit of work for what I am doing. I just want to be able to draw a simple string in the upper left of the window.
Is it possible to mix OpenGL with 2D drawing commands? I'm using GLKView, so I suspect it involves adding some code to drawInRect:.
I am using OpenGL ES 1.1 for this.
Found a solution for this: if you add a UILabel as a subview of the GLKView, you can draw text over the OpenGL content.
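For example, a minimal sketch (placed, say, in a GLKViewController's viewDidLoad; the frame and text are placeholders):

```objc
UILabel *hud = [[UILabel alloc] initWithFrame:CGRectMake(10, 10, 200, 24)];
hud.text = @"Score: 0";
hud.textColor = [UIColor whiteColor];
hud.backgroundColor = [UIColor clearColor];
// self.view is the GLKView; UIKit composites the label over the GL content.
[self.view addSubview:hud];
```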

OpenGL ES iOS Texture2D drawing upside down

I am writing a small 2D game on iOS using OpenGL, as a way to get comfortable with it. I am using the Texture2D class from the CrashLanding demo to generate text for the score. When the text is drawn, it is upside down. There are comments in the code about the texture being loaded upside down, but I cannot find a way to render it the correct way. Any help would be greatly appreciated.
OpenGL and your image-loading code do not agree on where the origin is. OpenGL's origin is in the lower-left corner, while your picture's origin is in the upper-left corner. You can apply a transform to the picture in your app, as the CrashLanding demo does. Or, even simpler, pre-flip the image in an editing program such as Photoshop. That works if the image will only be used as an OpenGL texture in this app; if you need to display the same picture elsewhere, you'll need to keep a non-flipped version, or figure out how to apply the transform.
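A third option, instead of transforming the image itself, is to flip the V axis of the quad's texture coordinates so the upside-down bitmap samples upright. A small helper (the interleaved u0,v0,u1,v1,... layout is an assumption about how your vertex data is stored):

```c
#include <stddef.h>

/* Flip the V coordinate of each UV pair in place, mapping v to 1 - v.
   uv holds pair_count interleaved pairs: u0,v0, u1,v1, ... */
void flip_tex_coords_v(float *uv, size_t pair_count) {
    for (size_t i = 0; i < pair_count; i++) {
        uv[i * 2 + 1] = 1.0f - uv[i * 2 + 1];
    }
}
```

After flipping, the row of the bitmap that was sampled at the top of the quad is sampled at the bottom, which undoes the origin mismatch.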

Upgrading from CoreGraphics

I've written my first iOS app, Amaziograph, which uses Core Graphics.
My app is a drawing app, and it involves drawing a lot of lines (up to 30 lines one by one, at different locations, plus some shadow to simulate brush blur; it needs to appear as if all lines are drawn at the same time) with CG, which I find to be slow. In fact, when I switch to Retina and try drawing just a single line with my finger, I need to wait a second or so before it is drawn.
I realised that Core Graphics no longer meets my app's requirements as I'd like to make it use the Retina display's advantages and add some photoshop-style brushes.
My question is: is there a graphics library faster and more powerful than Core Graphics, but with a simple interface? All I need is to draw simple lines with size, opacity and softness, and possibly some more advanced brushes. I'm thinking of OpenGL after seeing Apple's GLPaint app, but it seems a bit complicated, with all those framebuffers, contexts and so on. I am looking for something similar to CG's ideology, so it wouldn't take much time to rewrite my code. Also, right now I'm doing all my drawing in UIImage views, so it would be nice to draw on top of UIImages directly.
Here is an extract of the code I'm using to draw right now:
// ...begin image context >> draw the previous image in >> set stroke style >>
CGContextBeginPath(currentContext);
CGContextMoveToPoint(currentContext, lastPoint.x, lastPoint.y - offset);
CGContextAddLineToPoint(currentContext, currentPoint.x, currentPoint.y - offset);
CGContextStrokePath(currentContext);
// Send to a UIImage and end image context...
You are not going to find another graphics library with better performance than Core Graphics on the iOS platform. Most likely your application can be significantly optimised; there are many tricks to use. You might be interested in WWDC 2012 session 506:
http://developer.apple.com/videos/wwdc/2012/

Optimizing 2D Graphics and Animation Performance
They demonstrate a paint application using Core Graphics that runs at full frame rate.

Replicating UIView drawRect in OpenGL ES

My iOS application draws into a bitmap (same size as my view) using Core Graphics. I want to push updated regions of the bitmap to the screen. (I've used the standard UIView drawRect method but I have some good reasons to switch to OpenGL).
I just want to replicate the same behavior as UIView/CALayer drawRect but in an OpenGL view. Essentially I would like to update dirty rectangles on my OpenGL view. Nothing more.
So far I've been able to create an OpenGL ES 1.1 view and push my entire bitmap on screen using a single quad (texture on a vertex array) for each update of my bitmap. Of course, this is pretty inefficient, since I only need to refresh the dirty rectangle, not the whole view.
What would be the most efficient way to do that in OpenGL ES? Should I use a lattice of quads and update the texture of the quads that intersect with my dirty rectangle? (If I were to use that method, should I use VBO?) Is there a better way to do that?
FYI (just in case), I won't need rotation but will need to scale the entire OpenGL view.
UPDATE:
This method indeed works. However, there's a bug in iOS 5.x on Retina display devices that produces an artifact when using single buffering. The problem has been fixed in iOS 6; I don't yet have a workaround.
You could simply update part of the texture using glTexSubImage2D and redraw your standard full-screen quad, but with the scissor rect set (glScissor) to the "dirty" part. GL will then not draw any fragments outside this rect.
For this to work, you must of course use single buffering.
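A sketch of that dirty-rect update, assuming `texture` already holds the full bitmap, `dirty` is a CGRect in pixel coordinates, and `dirtyPixels` points at the tightly packed RGBA rows of just the dirty region (all placeholder names):

```objc
// Upload only the changed region of the backing texture.
glBindTexture(GL_TEXTURE_2D, texture);
glTexSubImage2D(GL_TEXTURE_2D, 0,
                (GLint)dirty.origin.x, (GLint)dirty.origin.y,
                (GLsizei)dirty.size.width, (GLsizei)dirty.size.height,
                GL_RGBA, GL_UNSIGNED_BYTE, dirtyPixels);

// Restrict the redraw of the full-screen quad to the same region.
glEnable(GL_SCISSOR_TEST);
glScissor((GLint)dirty.origin.x, (GLint)dirty.origin.y,
          (GLsizei)dirty.size.width, (GLsizei)dirty.size.height);
// ... draw the full-screen quad here; fragments outside the rect are skipped.
glDisable(GL_SCISSOR_TEST);
```

Remember that glScissor coordinates are in GL's bottom-left-origin space, so the Y value may need converting from your CG rect.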
