GLKit doesn't draw GL_POINTS or GL_LINES - iOS

I am working hard on a new iOS game that is drawn only with procedurally generated lines. All is working well, except for a few strange hiccups with drawing some primitives.
I am at a point where I need to implement text, and the characters are set up as a series of points in an array. When I go to draw the points (which are CGPoints), some of the drawing modes behave strangely.
effect.transform.modelviewMatrix = matrix;
[effect prepareToDraw];
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, 0, 0, &points);
glDrawArrays(GL_POINTS, 0, ccc);
I am using this code to draw from the array, and when the mode is set to GL_LINE_LOOP or GL_LINE_STRIP everything works well. But if I set it to GL_POINTS, I get a gpus_ReturnGuiltyForHardwareRestart error. And if I try GL_LINES, it just doesn't draw anything.
What could possibly be going on?

When you draw with GL_POINTS in ES 2.0 or ES 3.0, you need to write gl_PointSize in the vertex shader or you get undefined behavior (ugly rendering on the device at best, the crash you're seeing at worst). The vertex shader GLKBaseEffect uses doesn't write gl_PointSize, so you can't use it with GL_POINTS; you'll need to implement your own shaders. (For a starting point, try the ones in the "OpenGL Game" template you get when creating a new Xcode project, or use the Xcode Frame Debugger to look at the GLSL that GLKBaseEffect generates.)
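For illustration, a minimal ES 2.0 vertex shader that makes GL_POINTS renderable might look like this (a sketch; the attribute and uniform names are placeholders, not what GLKBaseEffect uses internally):
attribute vec4 position;
uniform mat4 modelViewProjectionMatrix;

void main()
{
    gl_Position = modelViewProjectionMatrix * position;
    gl_PointSize = 8.0; // required for GL_POINTS; can also come from a uniform or attribute
}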
GL_LINES should work fine as long as you're setting an appropriate width with glLineWidth() in client code.
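For example (a minimal sketch, reusing the question's point array and count):
glLineWidth(1.0);               // 1.0 is always supported; wider widths depend on the implementation
glDrawArrays(GL_LINES, 0, ccc); // each consecutive pair of points becomes one segment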

Related

iOS OpenGL drawing lines: not anti-aliasing

I'm trying to render a waveform in an EAGLContext view, and I can't for the life of me get it to anti-alias. Is anything clearly wrong with my OpenGL code? Is any more information required?
glLineWidth(0.4f); // - pass * 1.0f
glHint(GL_POINT_SMOOTH_HINT, GL_NICEST);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
glEnable(GL_LINE_SMOOTH);
glColor4f(1., 1., 1., 1.);
// Set up vertex pointer,
glVertexPointer(2, GL_FLOAT, 0, oscilLine);
// and draw the line.
glDrawArrays(GL_LINE_STRIP, 0, kDefaultDrawSamples);
Is anything clearly wrong with my OpenGL Code?
glHint(GL_POINT_SMOOTH_HINT, GL_NICEST);
^^^^^^^^^^^^^^^^^^^^
Try changing this to GL_LINE_SMOOTH_HINT?
Also, since you're using the deprecated 1.x APIs, you should use glGet with GL_LINE_WIDTH_RANGE and GL_LINE_WIDTH_GRANULARITY to verify that 0.4 is actually a supported width. You also need to ensure that the rendering target you've created has bits for an alpha channel. Can you add the context creation code?
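A quick way to check that (a sketch; in the OpenGL ES 1.1 headers the tokens for this query are the ALIASED/SMOOTH variants):
GLfloat aliasedRange[2], smoothRange[2];
glGetFloatv(GL_ALIASED_LINE_WIDTH_RANGE, aliasedRange); // range for ordinary lines
glGetFloatv(GL_SMOOTH_LINE_WIDTH_RANGE, smoothRange);   // range when GL_LINE_SMOOTH is enabled
NSLog(@"aliased %f-%f, smooth %f-%f", aliasedRange[0], aliasedRange[1], smoothRange[0], smoothRange[1]);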
Finally, it's not quite canonical, but according to this article, this mechanism of line smoothing doesn't work on iOS devices, though it apparently may work in the simulator.

Good tutorial on using Quads for custom Text in OpenGL ES 2.0 on iOS

I'm new to OpenGL ES and am teaching myself how to program iOS games. I'm currently playing with a project that I would like to overlay with a HUD containing some custom text. I don't want to do this using a UILabel, and I currently have no idea how to use quads to cut up a PNG (or similar) full of glyphs and assemble them into strings for display. I would like the end result to be passing a simple string to a method and having the output displayed using the textures/bitmap for the quads, say glPrint("Hello World");. Would anyone be able to point me in the proper direction? There doesn't seem to be a single good tutorial on how to do this for OpenGL ES 2.0 (just OpenGL). I also want to avoid using third-party APIs; I really need/want to understand how to tackle this.
When I was getting started with OpenGL ES for my current 2D project, I used Ray's tutorial, which helped me get a handle on rendering textured 2D quads. In conjunction with his 3D OpenGL ES tutorial, you might be able to piece together what you want to do. Note that you probably wouldn't render every single quad separately as in the tutorial, since that is very inefficient. Instead, you would gather all of the vertices of the characters into two big arrays/vertex buffers and batch render the characters.
The basic flow for rendering each frame would probably look like this. First the 3D scene, which you've already done: pass a normal perspective projection matrix, get the vertex information for your 3D scene to your shaders, and render it. Then, immediately after, the text: pass in an orthographic projection matrix, bind your font texture (generally generated earlier with the GLKTextureLoader class) to the active texture unit, generate two big arrays of texture and geometric vertices for the characters (or update your VBOs if the text has changed), pass them in, and batch render all of the letters at once using either glDrawArrays or glDrawElements (which requires indices).
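A rough sketch of that text pass, assuming a GLKBaseEffect (a custom shader works the same way, with the orthographic matrix passed as a uniform) and placeholder names for the font texture, VBOs, and counts:
// Switch to an orthographic projection for the text/HUD pass.
effect.transform.projectionMatrix = GLKMatrix4MakeOrtho(0, viewWidth, 0, viewHeight, -1, 1);
effect.texture2d0.name = fontTexture.name; // atlas loaded earlier with GLKTextureLoader
effect.texture2d0.enabled = GL_TRUE;
[effect prepareToDraw];

// Interleaved x, y, u, v per vertex; rebuilt only when the string changes.
glBindBuffer(GL_ARRAY_BUFFER, textVertexBuffer);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), 0);
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), (void *)(2 * sizeof(GLfloat)));

// One call draws every character quad (6 indices per quad).
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, textIndexBuffer);
glDrawElements(GL_TRIANGLES, 6 * characterCount, GL_UNSIGNED_SHORT, 0);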
Also, as I'm new to OpenGL myself, some of this may be wrong or inefficient. I've yet to use OpenGL ES to render anything 3D, so I'm not sure what other state changes (enabling, disabling, etc.) besides a different projection matrix might be needed between rendering your 3D scene and the 2D scene (text).
It seems that drawing text using only OpenGL is a relatively difficult and tedious task, so if you just want to render a HUD overlay displaying frame rates and similar information, you're much better off using UILabels and saving yourself the trouble, especially if your project is not very complex. This also spares you from having to deal with wrapping, kerning, font sizes, colors, different languages, and a load of other things that greatly complicate text rendering if you need anything more complex.
Rather than tracking the location of each letter, why not use Core Graphics to draw your entire string into a bitmap, then upload that as a texture? You'd just need to get the dimensions from your bitmap to know what size quad to draw for that text string.
Within my open source GPUImage framework, I have an input class called a GPUImageUIElement that does something similar. The relevant code from that input is as follows:
CGSize layerPixelSize = [self layerSizeInPixels];
GLubyte *imageData = (GLubyte *) calloc(1, (int)layerPixelSize.width * (int)layerPixelSize.height * 4);
CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
CGContextRef imageContext = CGBitmapContextCreate(imageData, (int)layerPixelSize.width, (int)layerPixelSize.height, 8, (int)layerPixelSize.width * 4, genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGContextTranslateCTM(imageContext, 0.0f, layerPixelSize.height);
CGContextScaleCTM(imageContext, layer.contentsScale, -layer.contentsScale);
[layer renderInContext:imageContext];
CGContextRelease(imageContext);
CGColorSpaceRelease(genericRGBColorspace);
glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)layerPixelSize.width, (int)layerPixelSize.height, 0, GL_BGRA, GL_UNSIGNED_BYTE, imageData);
free(imageData);
This code takes a CALayer (either directly or from the backing layer of a UIView) and renders its contents to a texture. I've already initialized the texture before this, so the code sets up a bitmap context, renders the layer into that context using -renderInContext:, and then uploads that bitmap to the texture for use in OpenGL ES.
The helper method -layerSizeInPixels just accounts for the current Retina scale factor as follows:
- (CGSize)layerSizeInPixels;
{
    CGSize pointSize = layer.bounds.size;
    return CGSizeMake(layer.contentsScale * pointSize.width, layer.contentsScale * pointSize.height);
}
If you used a UILabel for your view and had it autosize to fit its text, you could set the text on it, use the above to render and upload your texture, and then take the pixel size of the element to determine your quad size. However, it would probably be more efficient to just draw the text yourself using -drawAtPoint:withFont: or the like with an NSString.
Using Core Graphics to render your text makes it easy to manipulate the text as an NSString and use all of Core Graphics' typesetting capabilities instead of rolling your own.
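A sketch of that simpler path (placeholder names throughout; sizeWithFont: and drawAtPoint:withFont: are the UIKit methods of this answer's era, and GLKTextureLoader needs a current EAGLContext):
NSString *text = @"Hello World";
UIFont *font = [UIFont boldSystemFontOfSize:24.0];
CGSize textSize = [text sizeWithFont:font];

// Render the string into a bitmap with Core Graphics / UIKit.
UIGraphicsBeginImageContextWithOptions(textSize, NO, [UIScreen mainScreen].scale);
[[UIColor whiteColor] set];
[text drawAtPoint:CGPointZero withFont:font];
UIImage *textImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Upload it as a texture (or use glTexImage2D as in the code above).
NSError *error = nil;
GLKTextureInfo *fontTexture = [GLKTextureLoader textureWithCGImage:textImage.CGImage options:nil error:&error];
// textSize (in points) times the screen scale gives the pixel size of the quad to draw.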

glDrawArrays from iOS to OS X

I'm trying to get a game I made for iOS to work on OS X, and so far I have been able to get everything working except for the drawing of some randomly generated hills using a GL-bound texture.
It works perfectly on iOS, but somehow this part is the only thing not visible when the app is run on OS X. I checked all the coordinates and color values, so I'm pretty sure it has to do with OpenGL somehow.
glDisable(GL_TEXTURE_2D);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glBindTexture(GL_TEXTURE_2D, _textureSprite.texture.name);
glColor4f(_terrainColor.r,_terrainColor.g,_terrainColor.b, 1);
glVertexPointer(2, GL_FLOAT, 0, _hillVertices);
glTexCoordPointer(2, GL_FLOAT, 0, _hillTexCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, (GLsizei)_nHillVertices);
glEnableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_2D);
You're disabling the texture coordinate (and color) arrays along with the texturing unit, yet you're still setting a texture coordinate pointer.
Is this really what you intend to do?
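For reference, if texturing this draw is intentional, the state would normally be enabled around it rather than disabled (a sketch using the question's own identifiers):
glEnable(GL_TEXTURE_2D);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);

glBindTexture(GL_TEXTURE_2D, _textureSprite.texture.name);
glColor4f(_terrainColor.r, _terrainColor.g, _terrainColor.b, 1);
glVertexPointer(2, GL_FLOAT, 0, _hillVertices);
glTexCoordPointer(2, GL_FLOAT, 0, _hillTexCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, (GLsizei)_nHillVertices);

glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisable(GL_TEXTURE_2D);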
Apparently it was being drawn after all, only as a half-pixel line. Somehow there is some scaling on the vertices in effect; I will have to check my code.

OpenGL ES 2.0, drawing using multiple vertex buffers

I can't find much info on whether drawing from multiple vertex buffers is supported in OpenGL ES 2.0 (i.e., using one vertex buffer for position data and another for normals, colors, etc.). This page http://developer.apple.com/library/ios/#documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/TechniquesforWorkingwithVertexData/TechniquesforWorkingwithVertexData.html (listing 9.4 in particular) implies you should be able to, but I can't get it to work in my program. Code for the offending draw call:
glBindBuffer(GL_ARRAY_BUFFER, mPositionBuffer->openglID);
glVertexAttribPointer(0, 4, GL_FLOAT, 0, 16, NULL);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, mTexCoordBuffer->openglID);
glVertexAttribPointer(1, 2, GL_FLOAT, 0, 76, NULL);
glEnableVertexAttribArray(1);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mIndexBuffer->openglID);
glDrawElements(GL_TRIANGLES, 10788, GL_UNSIGNED_SHORT, NULL);
This draw call stalls or crashes with EXC_BAD_ACCESS on the simulator, and gives very weird behavior on the device (OpenGL draws random triangles or presents previously rendered frames). No OpenGL call ever returns an error, and I've inspected the vertex buffers extensively and am confident they have the correct sizes and data.
Has anyone successfully rendered using multiple vertex buffers and can share their experience on why this might not be working? Any info on where to start debugging stalled/failed draw calls that don't return any error code would be greatly appreciated.
Access violations generally mean that you are trying to draw more triangles than you have allocated in a buffer. The way you've set up the buffers is perfectly fine and should work; I would check that your parameters are set properly:
http://www.opengl.org/sdk/docs/man/xhtml/glVertexAttribPointer.xml
http://www.opengl.org/sdk/docs/man/xhtml/glDrawElements.xml
I think your issue is either that you've switched offset and stride in your glVertexAttribPointer calls, or that you've miscounted the number of indices you're drawing.
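For reference, the parameter order is (index, size, type, normalized, stride, offset). If each buffer holds a single tightly packed attribute, the calls would look something like this (a sketch; stride can also simply be 0 for tightly packed data):
glBindBuffer(GL_ARRAY_BUFFER, mPositionBuffer->openglID);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), NULL); // 16-byte stride
glEnableVertexAttribArray(0);

glBindBuffer(GL_ARRAY_BUFFER, mTexCoordBuffer->openglID);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(GLfloat), NULL); // 8 bytes, not 76
glEnableVertexAttribArray(1);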
Yes, you can use multiple vertex buffer objects (VBOs) for a single draw. The OpenGL ES 2.0 spec says so in section 2.9.1.
Do you really have all those hard-coded constants in your code? Where did that 76 come from?
If you want help debugging, you need to post the code that initializes your buffers (the code that calls glGenBuffers and glBufferData). You should also post the stack trace of EXC_BAD_ACCESS.
It might also be easier to debug if you drew something simpler, like one triangle, instead of 3596 triangles.
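For comparison, a typical buffer setup looks like this (a sketch with hypothetical names and counts, since the question's initialization code isn't shown):
GLuint positionBuffer = 0;
glGenBuffers(1, &positionBuffer);
glBindBuffer(GL_ARRAY_BUFFER, positionBuffer);
glBufferData(GL_ARRAY_BUFFER, vertexCount * 4 * sizeof(GLfloat), positionData, GL_STATIC_DRAW);

GLuint indexBuffer = 0;
glGenBuffers(1, &indexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexCount * sizeof(GLushort), indexData, GL_STATIC_DRAW);

// The count passed to glDrawElements (10788 above) must not exceed indexCount, and no
// index value may reference a vertex beyond the end of the smallest attribute buffer.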

Is it possible to use a pixel shader inside a sprite?

Is it possible to use a pixel shader inside a sprite?
I have created a simple pixel shader that just writes red, for testing. I have surrounded my Sprite.DrawImage(tex, ...) call with effect.Begin(...), BeginPass(0), EndPass(), and End(), but my shader does not seem to be used: my texture is drawn normally.
I am not sure what language you are using. I will assume this is an XNA question.
Is it possible to use a pixel shader inside a sprite?
Yes, you can load a shader file (HLSL, up to and including shader model 3 in XNA) and call SpriteBatch using it.
If you post sample code, it would be easier for us to see if anything isn't set up properly. However, it looks like you have things in the right order. I would check the shader code.
Your application code should look something like this:
Effect effect;
effect = Content.Load<Effect> ("customeffect"); //load "customeffect.fx"
effect.CurrentTechnique = effect.Techniques["customtechnique"];
effect.Begin();
foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Begin();
    spriteBatch.Begin(SpriteBlendMode.AlphaBlend, SpriteSortMode.Immediate, SaveStateMode.None);
    spriteBatch.Draw(texture, Vector2.Zero, null, Color.White, 0, new Vector2(20, 20), 1, SpriteEffects.None, 0);
    spriteBatch.End();
    pass.End();
}
effect.End();
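If the texture still draws untinted, the effect file itself is the next thing to check. A minimal shader model 2 effect that forces red output might look like this (a sketch; the technique name has to match the one selected in code):
// customeffect.fx
sampler TextureSampler : register(s0);

float4 PixelShaderFunction(float2 texCoord : TEXCOORD0) : COLOR0
{
    // Sample only to keep the sprite's alpha; output red to prove the shader runs.
    float4 color = tex2D(TextureSampler, texCoord);
    return float4(1, 0, 0, color.a);
}

technique customtechnique
{
    pass Pass1
    {
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}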
